
Juniper admin, firewall & IDPS logs are not parsed correctly #2643

Open
imsidr opened this issue Nov 19, 2024 · 22 comments
Labels
bug Something isn't working

Comments

@imsidr

imsidr commented Nov 19, 2024

Note: If your issue is not a bug or a feature request, please raise a support ticket through our support portal (Splunk.com > Support > Support Portal). This will help us resolve your issue more efficiently and provide you with better assistance. For more information on how to work with the Splunk Support, please refer to this guide.

Was the issue replicated by support? no
What is the SC4S version? 3.27.0

Which operating system (including its version) are you using for hosting SC4S? Ubuntu

Which runtime (Docker, Podman, Docker Swarm, BYOE, MicroK8s) are you using for SC4S? docker

Is there a pcap available? If so, would you prefer to attach it to this issue or send it to Splunk support? yes

Is the issue related to the environment of the customer or Software related issue? Software related issue

Is it related to data loss? If so, please explain. No
Protocol? Hardware specs?

Last chance index/Fallback index? sc4s

Is the issue related to local customization? yes

Do we have all the default indexes created? yes

Describe the bug
Juniper admin, firewall & IDPS logs are not parsed correctly

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...' index=firewall or index=juniper*
  2. Click on '....'
  3. Scroll down to '....'
  4. See error
@imsidr
Author

imsidr commented Nov 19, 2024

pcap attached to Splunk support case #3621290

@rjha-splunk added the bug (Something isn't working) label Nov 20, 2024
@imsidr
Author

imsidr commented Nov 21, 2024

Hi @rjha-splunk @sbylica-splunk, any update on this?

@sbylica-splunk
Contributor

Hi @imsidr, looking into it, I will post an update once we have something more.

@sbylica-splunk
Contributor

@imsidr so we decided to add parsing for logs with an RT_SYSTEM tag, assigning them the sourcetype juniper:legacy. Does that seem good to you?
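
For illustration only, here is a minimal sketch of the tag-based routing described above, not SC4S's actual syslog-ng filter: events carrying the RT_SYSTEM tag would land in the juniper:legacy sourcetype, while RT_FLOW events keep their existing rule. The RT_FLOW sourcetype name and the fallback value in the sketch are assumptions.

```python
# Illustrative sketch only -- not SC4S's actual filter definition.
# It mirrors the routing discussed in this issue: RT_SYSTEM events get the
# new juniper:legacy sourcetype, RT_FLOW events keep their existing rule.
# The RT_FLOW sourcetype name and the fallback value below are assumptions.

def classify_juniper_event(raw_message: str) -> str:
    """Return the sourcetype this raw syslog message would be routed to."""
    if "RT_SYSTEM" in raw_message:
        return "juniper:legacy"          # new handling discussed in this issue
    if "RT_FLOW" in raw_message:
        return "juniper:junos:firewall"  # assumed existing RT_FLOW rule
    return "sc4s:fallback"               # unmatched events -> last-chance index

# Hypothetical sample messages, for demonstration only:
print(classify_juniper_event("RT_SYSTEM: example system event"))             # juniper:legacy
print(classify_juniper_event("RT_FLOW_SESSION_CREATE: example flow event"))  # juniper:junos:firewall
```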

@imsidr
Author

imsidr commented Dec 2, 2024 via email

@sbylica-splunk
Contributor

@imsidr yeah, logs with the RT_SYSTEM tag would be assigned the juniper:legacy sourcetype.

@imsidr
Author

imsidr commented Dec 2, 2024 via email

@sbylica-splunk
Contributor

@imsidr can you attach an example of an RT_FLOW log that is not being parsed properly? I don't see any in the pcap file that was attached.
Logs with the RT_FLOW tag should be parsed correctly, since we have a rule for them.

@imsidr
Author

imsidr commented Dec 4, 2024 via email

@sbylica-splunk
Contributor

@imsidr Yeah, we can set up a call. What would be a possible time for it?

@imsidr
Author

imsidr commented Dec 5, 2024 via email

@sbylica-splunk
Contributor

@imsidr today probably not; could we arrange it for tomorrow or next week?

@imsidr
Author

imsidr commented Dec 5, 2024 via email

@sbylica-splunk
Contributor

Hey @imsidr, so we have two follow-up questions for now:

  • Can we get a .pcap file with an example of logs carrying the RT_FLOW tag, like the one we saw in the meeting with the customer?
  • Were the logs getting parsed earlier? Was there any noticeable trigger point that led to this issue?

@imsidr
Author

imsidr commented Dec 11, 2024 via email

@sbylica-splunk
Contributor

@imsidr thanks, could you also ask for a screenshot from the Splunk side? We would like to see the message and the sc4s_tags field.
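
As an alternative to a screenshot, a hedged sketch of how the same information (the raw message and the sc4s_tags field) could be pulled directly from Splunk with the splunk-sdk Python package; the host, credentials, and index pattern below are placeholders, not values from this issue.

```python
# Sketch, assuming the splunk-sdk package (splunklib) is installed and that
# Juniper events land in an index matching "juniper*"; host and credentials
# below are placeholders.
import splunklib.client as client
import splunklib.results as results

service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Pull the raw message and the sc4s_tags field for recent Juniper events.
query = ('search index=juniper* earliest=-1h '
         '| table _time, sourcetype, sc4s_tags, _raw')
stream = service.jobs.oneshot(query, output_mode="json")

for item in results.JSONResultsReader(stream):
    if isinstance(item, dict):  # skip diagnostic Message objects
        print(item.get("sourcetype"), item.get("sc4s_tags"), item.get("_raw"))
```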

@imsidr
Author

imsidr commented Dec 11, 2024 via email

@sbylica-splunk
Contributor

Hi @imsidr, maybe I missed it but was there any response to this question?
Were the logs getting parsed earlier? Was there any noticeable trigger point that led to this issue?

Also, a new version of SC4S with the fix for RT_SYSTEM logs was released, can the customer upgrade to this version?
https://github.com/splunk/splunk-connect-for-syslog/releases/tag/v3.33.0

@imsidr
Author

imsidr commented Dec 12, 2024 via email

@sbylica-splunk
Contributor

Hi @imsidr, so it was being parsed correctly using the legacy method? Do we have a configuration/description of that method? OK, let's see if updating fixes this issue; keep me updated.

@imsidr
Author

imsidr commented Dec 13, 2024 via email

@imsidr
Author

imsidr commented Dec 18, 2024

@sbylica-splunk we are seeing jinja2 module errors after upgrading to v3.33; the container is stuck in a restarting state. We already have the jinja2 module installed, so we are not sure why we are getting this error. Attaching the sc4s file in the case.
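
A small diagnostic sketch that might help narrow this down: since SC4S runs in a container, the jinja2 module that matters is the one inside the container image, not the one on the Ubuntu host. The snippet below simply reports whether jinja2 is importable and which version is present in whatever Python environment it is run from (for example via docker exec into the SC4S container, and again on the host for comparison).

```python
# Minimal diagnostic sketch: report whether jinja2 is importable and which
# version is present in the current Python environment. Run it inside the
# SC4S container (e.g. via docker exec) as well as on the host to compare.
import importlib

try:
    jinja2 = importlib.import_module("jinja2")
    print("jinja2 version:", getattr(jinja2, "__version__", "unknown"))
except ImportError as exc:
    print("jinja2 is not importable here:", exc)
```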
