In the paper it is mentioned that the Ethernet headers, IP headers, and port numbers are removed. But when generating the pre-training data (in the get_burst_feature() function), I see that only the first 64 bytes of a packet are taken (line 109), and I don't see a line that removes the headers. Am I missing something?
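For context, this is a minimal sketch of what stripping those fields before truncation could look like. It is not the repository's get_burst_feature(); the function and variable names here (e.g. packet_payload_bytes) are illustrative assumptions, and it uses scapy rather than whatever parsing the original preprocessing relies on.

```python
# Hypothetical sketch, NOT the repository's get_burst_feature():
# one possible way to drop the Ethernet/IP headers and port numbers
# before keeping the first 64 bytes of a packet.
from scapy.all import Ether, IP, TCP, UDP, raw


def packet_payload_bytes(frame_bytes, keep=64):
    """Return up to `keep` bytes of a frame with the Ethernet and IP
    headers removed and the TCP/UDP port numbers zeroed out."""
    pkt = Ether(frame_bytes)           # parse the raw Ethernet frame
    if IP not in pkt:
        return b""                     # this sketch only handles IP traffic
    transport = pkt[IP].payload        # strip Ethernet + IP headers
    if TCP in pkt or UDP in pkt:
        transport.sport = 0            # zero the port numbers so they do not
        transport.dport = 0            # leak into the model features
    return raw(transport)[:keep]       # keep only the first `keep` bytes
```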
Hello, thank you for your interest in our work.
This bias does not come into play during pre-training, since at that stage there is no supervised task tied to a specific downstream scenario.
But the pre-training data and the fine-tuning data then have different distributions. Doesn't this affect the model's performance?
Since the pre-training phase is not targeted at a specific scenario or task, it aims to learn a general traffic representation that is not subject to the distributional effects introduced by supervised learning.