Unable to connect after run & Amazon health checks fail #80
Comments
hi @four43, the original AMI works as expected, I am assuming? It is just when you take a copy to a new AMI? Many thanks, uk-bolly
Yes, it is an issue with AMIs.
I've spotted this issue in my own build. @four43, in the role => tasks => section_4 => cis_4.6.x.yml => "4.6.5 | PATCH | Ensure default user umask is 027 or more restrictive | Set umask for /etc/login.defs pam_umask settings" task, try commenting out "/etc/bashrc" from the "loop" (see the sketch below). @uk-bolly, I don't know what the ultimate root cause is, but excluding that file from the loop allowed me to launch the instance normally. I'm guessing something runs in cloud-init that depends on a loose umask in /etc/bashrc. Notably, the Packer AMI build itself completes without problems or errors; it is only when launching an instance from the AMI that basic systemd services fail.
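For reference, a rough sketch of what that edit might look like. This is an assumption about the task's shape; the actual module, regexp, and file list in cis_4.6.x.yml may differ:

```yaml
# Hypothetical approximation of the 4.6.5 umask task with /etc/bashrc excluded.
- name: "4.6.5 | PATCH | Ensure default user umask is 027 or more restrictive | Set umask for /etc/login.defs pam_umask settings"
  ansible.builtin.replace:
    path: "{{ item }}"
    regexp: '(?i)^(\s*)umask\s+\d+'
    replace: '\1umask 027'
  loop:
    - /etc/login.defs
    - /etc/profile
    # - /etc/bashrc  # excluded: tightening the umask here broke first boot of instances launched from the copied AMI
```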
Thank you both for the feedback. Having had a quick read up, it is indeed cloud-init. In order to be compliant, that will need to be adjusted: either skip the file as you have mentioned, or set the permissions back in cloud-init and, once it has completed, fix them to be compliant again. It's not something that we would change as part of the role. It would make a great article on how to fix it once resolved. Kindest, uk-bolly
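One possible shape for that second approach, purely a sketch and not tested against the role: leave /etc/bashrc at the default umask when building the image (e.g. by excluding it from task 4.6.5), then have cloud-init re-apply the compliant value late on first boot, after its own modules have finished. The file path and umask values below are assumptions:

```yaml
#cloud-config
# Hypothetical user-data sketch: the AMI is built with /etc/bashrc left at the
# default umask, and the compliant 027 value is applied by runcmd on first
# boot, once cloud-init's earlier stages have completed.
runcmd:
  - sed -i 's/^\s*umask\s\+0\?22/umask 027/' /etc/bashrc
```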
hi @uk-bolly, your script changes the /etc/fstab entry, so it crashes when creating the AMI because of that. And @herman-wong-cf, the role => tasks => section_4 => cis_4.6.x.yml => "4.6.5 | PATCH | Ensure default user umask is 027 or more restrictive | Set umask for /etc/login.defs pam_umask settings" task is an issue as well; after commenting out "/etc/bashrc" from the "loop" as you suggested, cloud-init runs successfully.
Describe the Issue
After running the playbook, I restart the instance and can access it. However, if I take an AMI of the instance and try to launch it, the new instance won't start properly.
After running:
I can pull logs from the instance that is failing:
Boot Log
Expected Behavior
Instance fully boots without failures
Actual Behavior
See log above in repro steps
Control(s) Affected
What controls are being affected by the issue
I have no idea! I was hoping someone here might have an idea of what is nuking those systemd units.
Environment (please complete the following information):
ansible 2.10.17
python version = 3.9.17 (main, Jun 13 2023, 16:05:09) [GCC 8.3.0]
Additional Notes
Thanks for any insight or ideas!
Possible Solution
Unknown