diff --git a/colorblindness/index.html b/colorblindness/index.html index 8477536a4..dceb7ade9 100644 --- a/colorblindness/index.html +++ b/colorblindness/index.html @@ -6885,6 +6885,7 @@
  • https://www.joshwcomeau.com/css/make-beautiful-gradients: avoid grey, dull colors when making gradients by using HSL instead of RGB.
  • https://bsago.me/posts/that-annoying-shade-of-blue: Not really color blindness, but discusses human color perception and technology.
  • https://ericportis.com/posts/2024/okay-color-spaces
  • https://jfly.uni-koeln.de/color: "Color Universal Design (CUD) - How to make figures and presentations that are friendly to Colorblind people"
  • diff --git a/search/search_index.json b/search/search_index.json index 1469b98f1..a9b149acd 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"About these notes","text":"

    These are notes I've taken on technologies that I have used or would like to use.

    These notes started out some time before 2005 in VoodooPad 2. In December 2005 I discovered that you could self-host MediaWiki, so I moved my content into a private MediaWiki installation. Both VoodooPad and self-hosted MediaWiki worked fine for me, but as my notes became more useful and I wanted to show different sections to people in a way that let them discover useful content, the private nature of my self-hosted MediaWiki installation became problematic. MediaWiki also had the problem of being a web service, which meant it was not possible to access or edit content when my laptop was offline. I solved this for a while by running MediaWiki in a VM on my laptop, but that meant I couldn't access notes from other computers if my laptop was offline, and it meant I had a VM running at all times just to serve notes, which wasted a lot of resources. In 2015 I decided to move out of MediaWiki into markdown files in git, and in 2016 I began using mkdocs to publish these notes publicly to github pages.

    Since 2016, these notes have been rendered from markdown files and published to github-pages using mkdocs gh-deploy. If you have suggestions, please open a github issue. Please do not submit PRs.

    "},{"location":"3d-printing/","title":"3D Printing","text":""},{"location":"3d-printing/#links","title":"Links","text":""},{"location":"3d-printing/#see-also","title":"See Also","text":""},{"location":"airflow/","title":"Airflow","text":"

    \"Airflow is a platform created by the community to programmatically author, schedule and monitor workflows.\" - https://airflow.apache.org/

    "},{"location":"airflow/#links","title":"Links","text":""},{"location":"airport/","title":"Apple Airport","text":"

    Apple Airport hardware was discontinued in November 2016.

    "},{"location":"airport/#using-old-airport-utility-apps-with-new-versions-of-os-x","title":"Using old Airport Utility apps with new versions of OS X","text":"

    Or use the 5.6.1 Utility in Windows? Not sure if this works.

    "},{"location":"amazon/","title":"Amazon","text":"

    Mostly related to the technological offerings of Amazon, not the shopping experience.

    "},{"location":"amazon/#kindle","title":"Kindle","text":""},{"location":"amazon/#aws","title":"AWS","text":"

    \"Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.\" - https://aws.amazon.com/ec2/

    "},{"location":"amazon/#cloudformation","title":"Cloudformation","text":"

    cloudformation has its own notes page.

    "},{"location":"amazon/#links","title":"Links","text":""},{"location":"amazon/#tips","title":"Tips","text":""},{"location":"amazon/#determine-if-you-are-on-an-ec2-instance","title":"Determine if you are on an EC2 instance","text":"
    grep -i '^ec2' /sys/hypervisor/uuid\n
    "},{"location":"amazon/#reformat-accesskeyscsv-into-awscredentials-format","title":"Reformat accessKeys.csv into .aws/credentials format","text":"
    awk -F, 'BEGIN { print \"[temp_name]\" ; } !/Access/ {print \"aws_access_key_id = \"$1\"\\naws_secret_access_key = \"$2}' ~/Downloads/accessKeys.csv\n
    "},{"location":"amazon/#force-reset-mfa-credentials","title":"Force reset mfa credentials","text":"

    https://github.com/broamski/aws-mfa

    aws-mfa --device arn:aws:iam::$UID:mfa/$USER --force\n
    "},{"location":"amazon/#create-eks-cluster-from-cli","title":"Create eks cluster from cli","text":"

    https://github.com/weaveworks/eksctl

    eksctl create cluster\n
    "},{"location":"amazon/#get-eks-cluster-config","title":"Get eks cluster config","text":"
    # find your cluster name\naws eks list-clusters | jq -r '.clusters[]'\n\n# configure the current KUBECONFIG for the given cluster\naws eks update-kubeconfig --name the_cluster_name\n
    "},{"location":"amazon/#see-also","title":"See Also","text":""},{"location":"ansible/","title":"Ansible","text":""},{"location":"ansible/#modules","title":"Modules","text":""},{"location":"ansible/#see-also","title":"See also","text":""},{"location":"ansible/#examples","title":"Examples","text":""},{"location":"ansible/#generate-a-copy-block-for-a-given-file","title":"Generate a copy block for a given file","text":"

    Not perfect because the output is json, but json is valid yaml and easy enough to fix up quickly.

    ## stat -c '{\"copy\": {\"src\": \"SOURCE_FILE_NAME\", \"dest\": \"%n\", \"mode\": \"0%a\", \"owner\": \"%U\", \"group\": \"%G\"}}' /etc/logrotate.d/backup | jq .\n{\n  \"copy\": {\n    \"src\": \"SOURCE_FILE_NAME\",\n    \"dest\": \"/etc/logrotate.d/backup\",\n    \"mode\": \"0644\",\n    \"owner\": \"root\",\n    \"group\": \"root\"\n  }\n}\n
    "},{"location":"ansible/#show-a-list-of-installed-modules","title":"Show a list of installed modules","text":"
    ansible-doc --list\n
    "},{"location":"ansible/#run-a-playbook-and-prompt-for-sudo-password","title":"Run a playbook and prompt for sudo password","text":"
    ansible-playbook --ask-become-pass -i inventory/hosts.yaml create_users.yaml\n
    "},{"location":"ansible/#run-an-ad-hoc-command","title":"Run an ad-hoc command","text":"

    You can run one-off ad-hoc commands by passing a module name and arguments for that module.

    ansible localhost \\\n  -m get_url \\\n  -a \"mode=755\n    url=https://github.com/bcicen/ctop/releases/download/v0.7.1/ctop-0.7.1-linux-amd64\n    dest=/usr/local/bin/ctop\n    checksum=sha256:38cfd92618ba2d92e0e1262c0c43d7690074b4b8dc77844b654f8e565166b577\n    owner=root\n    group=root\"\n
    "},{"location":"ansible/#validate-and-inspect-your-inventory-file","title":"Validate and inspect your inventory file","text":"

    This command parses your inventory and group_vars and outputs a json data structure if no syntax errors are found.

    ansible-inventory -i inventory/hosts.yml --list\n
    "},{"location":"ansible/#use-arbitrary-groups-in-static-inventory-file","title":"Use arbitrary groups in static inventory file","text":"
    $ nl -w 2 -s ' ' -ba inventory/example.yml\n 1 all:\n 2   hosts:\n 3     client:\n 4       ansible_host: 192.168.1.2\n 5     server:\n 6       ansible_host: 192.168.2.3\n 7\n 8 linux:\n 9   hosts:\n10     server:\n11\n12 windows:\n13   hosts:\n14     client:\n15\n16 california:\n17   hosts:\n18     client:\n19     server:\n$ ansible-inventory -i inventory/example.yml --graph\n@all:\n  |--@california:\n  |  |--client\n  |  |--server\n  |--@linux:\n  |  |--server\n  |--@windows:\n  |  |--client\n
    "},{"location":"ansible/#merge-multiple-inventory-files","title":"Merge multiple inventory files","text":"

    The example below gives higher precedence to the later files.

    ## cat foo.yml\nall:\n  hosts:\n    client:\n      ansible_host: 192.168.1.2\n      service_hostname: hostname-from-file-1\n    server:\n      ansible_host: 192.168.2.3\n      file_number: one\n\n## cat bar.yml\nall:\n  hosts:\n    client:\n      ansible_host: 10.1.2.3\n    server:\n      ansible_host: 10.2.3.4\n      file_number: two\n\n## ansible-inventory -i foo.yml -i bar.yml --list | json-to-yaml.py\n_meta:\n  hostvars:\n    client:\n      ansible_host: 10.1.2.3\n      service_hostname: hostname-from-file-1\n    server:\n      ansible_host: 10.2.3.4\n      file_number: two\nall:\n  children:\n  - ungrouped\nungrouped:\n  hosts:\n  - client\n  - server\n
    "},{"location":"ansible/#show-all-resolved-variables-for-a-given-inventory-host","title":"Show all resolved variables for a given inventory host","text":"

    This will show all host vars, including variables resolved from all the different variable locations.

    ansible -i inventory target_hostname -m debug -a \"var=hostvars[inventory_hostname]\"\n
    "},{"location":"ansible/#gather-all-facts-and-save-them-to-files","title":"Gather all facts and save them to files","text":"

    This will create a directory called host_facts and save results as one json file per host.

    ansible -i inventory target_group_or_hostname -m gather_facts --tree host_facts\n
    "},{"location":"ansible/#generate-an-deterministic-random-number","title":"Generate a deterministic random number","text":"

    This is similar to the Puppet fqdn_rand() function, which is really useful to splay cron jobs. Splaying cron jobs avoids the thundering herd problem by spreading the jobs out over time with deterministic randomness.

    ---\n## defaults/main.yml\n\ndemo_cron_minute: \"{{ 59 | random(seed=inventory_hostname) }}\"\ndemo_cron_hour: \"{{ 23 | random(seed=inventory_hostname) }}\"\n

    See also: https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#randomizing-data
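    A rough plain-shell analogue of the same splaying idea is to hash a stable string and take it modulo 60. This is only a sketch; the hostname below is made up, and any stable per-host string would work:

    ```shell
    # Hash the hostname with cksum and reduce it to a stable cron minute.
    # web01.example.com is a hypothetical host; any stable string works.
    host="web01.example.com"
    minute=$(( $(printf '%s' "$host" | cksum | cut -d ' ' -f 1) % 60 ))
    echo "$minute"
    ```

    The same host always maps to the same minute, so jobs stay spread out without any coordination between hosts.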

    "},{"location":"ansible/#simple-ansible-playbook","title":"Simple ansible playbook","text":"

    This may be useful for testing syntax and experimenting with ansible modules.

    ---\n## playbook.yml\n\n- name: A local play\n  hosts: localhost\n  connection: local\n  gather_facts: no\n  tasks:\n    - name: Run cmd\n      shell: /bin/date\n      register: cmd_out\n\n    - debug:\n        var: cmd_out.stdout\n

    ansible-playbook -i localhost playbook.yml

    Slightly more complicated example:

    ## playbook.yml\n## run with: ansible-playbook -i localhost playbook.yml\n\n- name: A local play\n  hosts: localhost\n  connection: local\n  gather_facts: no\n  vars:\n    region: test_region\n    subnets:\n      - subnet_name: Public_2a\n        subnet_cidr: 192.168.100.0/26\n        subnet_az: \"{{ region }}_a\"\n      - subnet_name: Public_2b\n        subnet_cidr: 192.168.100.64/26\n        subnet_az: \"{{ region }}_b\"\n      - subnet_name: Private_2a\n        subnet_cidr: 192.168.100.128/26\n        subnet_az: \"{{ region }}_a\"\n      - subnet_name: Private_2b\n        subnet_cidr: 192.168.100.192/26\n        subnet_az: \"{{ region }}_b\"\n\n  tasks:\n    - name: Run cmd\n      shell: echo \"{{ item.subnet_name }} {{ item.subnet_cidr }} {{ item.subnet_az }}\"\n      register: cmd_out\n      loop: \"{{ subnets }}\"\n\n    - debug:\n        var: cmd_out\n
    "},{"location":"ansible/#get-a-list-of-failed-hosts","title":"Get a list of failed hosts","text":"
    {{ ansible_play_hosts_all | difference(ansible_play_hosts) }}\n
    "},{"location":"ansible/#links","title":"Links","text":""},{"location":"apfs/","title":"Apple APFS","text":"

    A lot of the notes here are as of macOS 10.13, and may not apply to other devices or OS versions that run APFS.

    APFS got some big bumps in macOS 12, including big snapshot improvements.

    "},{"location":"apfs/#usage","title":"Usage","text":"
    $ diskutil apfs\n2017-11-04 18:23:55-0700\nUsage:  diskutil [quiet] ap[fs] <verb> <options>\n        where <verb> is as follows:\n\n     list                (Show status of all current APFS Containers)\n     convert             (Nondestructively convert from HFS to APFS)\n     create              (Create a new APFS Container with one APFS Volume)\n     createContainer     (Create a new empty APFS Container)\n     deleteContainer     (Delete an APFS Container and reformat disks to HFS)\n     resizeContainer     (Resize an APFS Container and its disk space usage)\n     addVolume           (Export a new APFS Volume from an APFS Container)\n     deleteVolume        (Remove an APFS Volume from its APFS Container)\n     eraseVolume         (Erase contents of, but keep, an APFS Volume)\n     changeVolumeRole    (Change the Role metadata bits of an APFS Volume)\n     unlockVolume        (Unlock an encrypted APFS Volume which is locked)\n     lockVolume          (Lock an encrypted APFS Volume (diskutil unmount))\n     listCryptoUsers     (List cryptographic users of encrypted APFS Volume)\n     changePassphrase    (Change the passphrase of a cryptographic user)\n     setPassphraseHint   (Set or clear passphrase hint of a cryptographic user)\n     encryptVolume       (Start async encryption of an unencrypted APFS Volume)\n     decryptVolume       (Start async decryption of an encrypted APFS Volume)\n     updatePreboot       (Update the APFS Volume's related APFS Preboot Volume)\n\ndiskutil apfs <verb> with no options will provide help on that verb\n
    "},{"location":"apfs/#file-clones","title":"File clones","text":"

    APFS supports deduplicated file copies, which it calls clonefiles. Copying a file by option-dragging it in Finder creates a clonefile. To create a clonefile on the CLI use cp -c src dst. Creating clonefiles of any size file is instantaneous because no file data is actually copied. This differs from hard links: if you modify the clone, only the changed blocks are written to disk, and the source of the cloned file is not modified.

    "},{"location":"apfs/#snapshots","title":"Snapshots","text":"

    Snapshots appear to be tied pretty directly to Time Machine, and do not appear to be general purpose. There appear to be many limitations in how they can be used, and what information you can get about them.

    There was previously a tool called apfs_snapshot but it was removed before macOS 10.13 was released.

    "},{"location":"apfs/#create-a-snapshot","title":"Create a snapshot","text":"

    You cannot choose a name for your snapshot; it is named for the date the snapshot was taken, in the form YYYY-MM-DD-HHMMSS, or date \"+%Y-%m-%d-%H%M%S\"

    $ sudo tmutil localsnapshot\nNOTE: local snapshots are considered purgeable and may be removed at any time by deleted(8).\nCreated local snapshot with date: 2021-08-23-101843\n
    "},{"location":"apfs/#show-snapshots","title":"Show snapshots","text":"
    $ sudo tmutil listlocalsnapshots /\ncom.apple.TimeMachine.2017-11-01-161748\ncom.apple.TimeMachine.2017-11-02-100755\ncom.apple.TimeMachine.2017-11-03-084837\ncom.apple.TimeMachine.2017-11-04-182813\n
    "},{"location":"apfs/#mount-a-snapshot","title":"Mount a snapshot","text":"

    The easiest way to mount snapshots is to open Time Machine.app and browse backwards in time. This will mount your snapshots at /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/$HOSTNAME/$SNAPSHOT_DATE/Data or a similar path.

    If you just want to mount a single snapshot, fill in $snapshot_name using one of the lines from tmutil listlocalsnapshots /, then:

    mkdir apfs_snap\nmount_apfs -o nobrowse,ro -s \"$snapshot_name\" /System/Volumes/data \"$PWD/apfs_snap\"\n

    Older versions of macOS have a slightly different syntax:

    mkdir apfs_snap\nsudo mount_apfs -s \"$snapshot_name\" / \"${PWD}/apfs_snap\"\n
    "},{"location":"apfs/#delete-a-snapshot","title":"Delete a snapshot","text":"

    You can only delete snapshots by their date.

    $ sudo tmutil deletelocalsnapshots 2017-11-04-183813\nDeleted local snapshot '2017-11-04-183813'\n
    "},{"location":"apfs/#delete-all-snapshots","title":"Delete all snapshots","text":"
    /usr/bin/tmutil listlocalsnapshots / |\ngrep -oE '2[0-9]{3}-[0-9]{2}-[0-9]{2}-[0-9]{6}' |\nwhile read -r snap ; do\n  tmutil deletelocalsnapshots \"$snap\"\ndone\n
    "},{"location":"apfs/#thin-out-snapshots","title":"Thin out snapshots","text":"

    On the given drive, reclaim the given amount of space by thinning out snapshots. As of tmutil 4.0.0, you cannot use any data unit other than bytes (e.g. 1G or 1GB will not work).

    $ sudo tmutil thinlocalsnapshots / 250000000\nThinned local snapshots:\n2017-11-04-184425\n2017-11-04-184433\n2017-11-04-184440\n
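    Since only a raw byte count is accepted, shell arithmetic is a handy way to expand a human-friendly size. This computes the 250 MB figure used above:

    ```shell
    # Expand 250 MB (decimal) into the raw byte count tmutil expects
    bytes=$((250 * 1000 * 1000))
    echo "$bytes"   # → 250000000
    ```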
    "},{"location":"apfs/#see-also","title":"See also","text":"
    /System/Library/Filesystems/apfs.fs/Contents/Resources/apfs.util\n/System/Library/Filesystems/apfs.fs/Contents/Resources/apfs_invert\n/System/Library/Filesystems/apfs.fs/Contents/Resources/apfs_preflight_converter\n/System/Library/Filesystems/apfs.fs/Contents/Resources/apfs_stats\n
    "},{"location":"apfs/#links","title":"Links","text":""},{"location":"aptly/","title":"Aptly","text":""},{"location":"aria2/","title":"Aria2","text":"

    \"aria2 is a lightweight multi-protocol & multi-source command-line download utility. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent and Metalink. aria2 can be manipulated via built-in JSON-RPC and XML-RPC interfaces.\" - https://aria2.github.io/

    Of particular interest is the ability to download a single file from multiple sources, even using multiple protocols, for increased download speed.

    "},{"location":"aria2/#examples","title":"Examples","text":""},{"location":"aria2/#download-a-file-in-place","title":"Download a file in place","text":"

    This command can be canceled and given again to resume the file download.

    ## -x5 Connect once to each server\n## -c Continue a partially downloaded file (HTTP/FTP)\n## --file-allocation=none Do not pre-allocate disk space for the file (begin downloading immediately. see man page for more options.)\n## --max-overall-download-limit=3 (K = 1024, M = 1024K)\n## --max-download-limit=1M per connection speed limits\naria2c -x5 -c --file-allocation=none --max-overall-download-limit=3 --max-download-limit=1M http://example.com/foo.iso\n
    "},{"location":"aria2/#see-also","title":"See Also","text":""},{"location":"arpwatch/","title":"arpwatch","text":"

    \"arpwatch - keep track of ethernet/ip address pairings\" - man arpwatch

    "},{"location":"arpwatch/#examples","title":"Examples","text":""},{"location":"arpwatch/#fork-and-log-to-file-not-to-e-mail","title":"Fork and log to file, not to e-mail","text":"
    arpwatch -Q\ntail -F /var/lib/arpwatch/arp.dat\n
    "},{"location":"atomicparsley/","title":"AtomicParsley","text":"

    AtomicParsley is a lightweight command line program for reading, parsing and setting metadata into MPEG-4 files. It is a functional mp4 equivalent of what id3v2 is for mp3 files.

    "},{"location":"atomicparsley/#examples","title":"Examples","text":""},{"location":"atomicparsley/#set-metadata-on-multiple-files","title":"Set metadata on multiple files","text":"

    Unfortunately the syntax of this tool only lets you edit one file at a time, so you have to iterate over each item of an album using shell loops, xargs, or whatever you prefer.

    for file in *.m4a ; do\n  AtomicParsley \"${file}\" --artist \"Various Artists\" ;\ndone ;\n
    "},{"location":"atomicparsley/#remove-personally-identifiable-information-pii-from-files","title":"Remove Personally Identifiable Information (pii) from files","text":"

    Useful if you want to remove your personal info from iTunes Match files.

    for file in *.m4a ; do\n  AtomicParsley \\\n    \"$file\" \\\n    --DeepScan \\\n    --manualAtomRemove \"moov.trak.mdia.minf.stbl.mp4a.pinf\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.----.name:[iTunMOVI]\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.apID\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.atID\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.cnID\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.cprt\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.flvr\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.geID\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.plID\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.purd\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.rtng\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.sfID\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.soal\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.stik\" \\\n    --manualAtomRemove \"moov.udta.meta.ilst.xid\"\ndone\n
    "},{"location":"automotive/","title":"Automotive","text":""},{"location":"automotive/#links","title":"Links","text":""},{"location":"autonomous-vehicles/","title":"Autonomous Vehicle Links","text":""},{"location":"autonomous-vehicles/#terms","title":"Terms","text":""},{"location":"autonomous-vehicles/#autonomy-levels","title":"Autonomy Levels","text":""},{"location":"autonomous-vehicles/#links","title":"Links","text":""},{"location":"avahi/","title":"Avahi","text":"

    The Avahi mDNS/DNS-SD daemon implements Multicast DNS like Apple's Zeroconf architecture (also known as \"Rendezvous\" or \"Bonjour\").

    "},{"location":"avahi/#tips","title":"Tips","text":"

    After installing avahi-daemon it may not start. To fix this you may need to run service messagebus start

    Service types are defined in /usr/share/avahi/service-types

    "},{"location":"avahi/#service-configs","title":"Service configs","text":"

    Correctly formatted and named files in /etc/avahi/services/whatever.service are loaded on the fly, no need to restart avahi-daemon. If your service doesn't immediately show up, check syslog for errors.

    <?xml version=\"1.0\" standalone='no'?><!--*-nxml-*-->\n<!DOCTYPE service-group SYSTEM \"avahi-service.dtd\">\n<service-group>\n  <name replace-wildcards=\"yes\">%h</name>\n  <service>\n    <type>_ssh._tcp</type>\n    <port>22</port>\n  </service>\n  <service>\n    <type>_http._tcp</type>\n    <port>80</port>\n  </service>\n</service-group>\n
    "},{"location":"awk/","title":"awk","text":"

    \"pattern-directed scanning and processing language\" - man awk

    "},{"location":"awk/#examples","title":"Examples","text":"

    Some of these require GNU awk.

    "},{"location":"awk/#print-the-first-column-of-a-file","title":"Print the first column of a file","text":"
    awk '{print $1}' filename.txt\n
    "},{"location":"awk/#print-column-2-if-column-1-matches-a-string","title":"Print column 2 if column 1 matches a string","text":"
    ps aux | awk '$1 == \"root\" {print $2}'\n
    "},{"location":"awk/#pass-in-a-variable-and-value","title":"Pass in a variable and value","text":"
    ps | awk -v host=\"$HOSTNAME\" '{print host,$0}'\n
    "},{"location":"awk/#sort-a-file-by-line-lengths","title":"Sort a file by line lengths","text":"
    awk '{print length, $0}' testfile.txt | sort -n\n
    "},{"location":"awk/#tdl-to-csv","title":"TDL to CSV","text":"
    awk '{gsub(\"\\t\",\"\\\",\\\"\",$0); print;}' | sed 's#^#\"#;s#$#\"#;'\n
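    For example, feeding a tab-separated line through that pipeline produces quoted CSV (sample input is invented for the demo):

    ```shell
    # Tab-separated input in, quoted CSV out (same gsub + sed approach)
    printf 'a\tb\tc\n' |
    awk '{ gsub("\t", "\",\""); print }' |
    sed 's/^/"/; s/$/"/'
    # → "a","b","c"
    ```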
    "},{"location":"awk/#print-the-first-column-of-every-other-line","title":"Print the first column of every other line","text":"

    % is the modulus operator, which finds the remainder after an integer divide.

    awk 'NR % 2 == 0 { print $1 }'\n
    "},{"location":"awk/#print-only-even-numbered-lines","title":"Print only even numbered lines","text":"
    ls | awk 'NR % 2 == 0 { print $0 }'\n
    "},{"location":"awk/#print-only-odd-numbered-lines","title":"Print only odd numbered lines","text":"
    ls | awk 'NR % 2 != 0 { print $0 }'\n
    "},{"location":"awk/#print-even-numbered-lines-on-the-same-line-before-odd-numbered-lines","title":"Print even numbered lines on the same line before odd numbered lines","text":"
    awk '{if (NR%2==0) { print $0 \" \" prev } else { prev=$0 }}'\n
    "},{"location":"awk/#print-sum-all-the-first-columns-of-each-line-in-a-file","title":"Print sum all the first columns of each line in a file","text":"
    awk '{sum += $1} END {print sum}' filename\n
    "},{"location":"awk/#print-count-sum-and-average-of-the-first-column-of-stdin","title":"Print count, sum, and average of the first column of stdin","text":"
    for _ in {1..100} ; do echo $((RANDOM % 100)) ; done |\nawk '{sum += $1} END {avg = sum/NR ; printf \"Count:   %s\\nSum:     %s\\nAverage: %s\\n\", NR, sum, avg}'\n
    "},{"location":"awk/#split-file-by-recurring-string","title":"Split file by recurring string","text":"

    This will create a new file every time the string \"SERVER\" is found, essentially splitting the file by that string. Concatenating all of the output files would create the original file (potentially adding an extra newline).

    awk '/SERVER/{n++}{print >\"out\" sprintf(\"%02d\", n) \".txt\" }' example.txt\n
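    A quick self-contained check of that round-trip claim, using a small invented sample file:

    ```shell
    # Split a small sample file on "SERVER", then verify the pieces
    # concatenate back into the original. Parentheses around the
    # redirection target keep all awk implementations happy.
    printf 'SERVER a\n1\nSERVER b\n2\n' > example.txt
    awk '/SERVER/{n++}{print > ("out" sprintf("%02d", n) ".txt")}' example.txt
    cat out*.txt | cmp - example.txt && echo "identical"
    ```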
    "},{"location":"awk/#show-count-of-syslog-messages-per-minute","title":"Show count of syslog messages per minute","text":"
    awk -F: '{print $1 \":\" $2}' /var/log/messages | uniq -c\n
    "},{"location":"awk/#show-count-of-root-logins-per-minute","title":"Show count of root logins per minute","text":"
    awk -F: '/root/{print $1 \":\" $2}' /var/log/auth.log |uniq -c\n
    "},{"location":"awk/#print-lines-in-ls-where-uid-is-numeric","title":"Print lines in ls where UID is numeric","text":"
    ls -la | awk '$3 ~/[0-9]/{print}'\n
    "},{"location":"awk/#show-only-zfs-snapshots-whose-size-is-zero","title":"Show only zfs snapshots whose size is zero","text":"
    zfs list -t snapshot | awk '$2 == 0'\n
    "},{"location":"awk/#print-a-line-if-the-third-field-does-not-match-a-regex","title":"Print a line if the third field does not match a regex","text":"
    tcpdump -r ops1prod-syn.cap | sort -k2 | awk '$3 !~ /ztmis.prod/ { print }'\n
    "},{"location":"awk/#show-500-errors-in-a-standard-apache-access-log","title":"Show 500 errors in a standard apache access log","text":"
    awk '$9 ~ /5[0-9][0-9]/' www_zoosk_access.log\n
    "},{"location":"awk/#show-total-rss-and-vsz-count-for-all-cronolog-processes","title":"Show total rss and vsz count for all cronolog processes","text":"
    ps aux |\n  grep -i cronolo[g] |\n  awk '{vsz += $5; rss += $6} END {print \"vsz total = \"vsz ; print \"rss total = \"rss}'\n
    "},{"location":"awk/#get-ipv4-address-on-bsdosx","title":"Get IPv4 address on BSD/OSX","text":"
    ifconfig | awk '$1 == \"inet\" && $2 != \"127.0.0.1\" {print $2}'\n
    "},{"location":"awk/#get-ipv6-address-on-bsdosx","title":"Get IPv6 address on BSD/OSX","text":"
    ifconfig | awk '$1 == \"inet6\" && $2 !~ \"::1|.*lo\" {print $2}'\n
    "},{"location":"awk/#print-the-last-element","title":"Print the last element","text":"
    ls -la | awk -F\" \" '{print $NF}'\n
    "},{"location":"awk/#print-2nd-to-last-element","title":"Print 2nd to last element","text":"
    ls -la | awk -F\" \" '{print $(NF - 1)}'\n
    "},{"location":"awk/#print-the-previous-line-on-string-match","title":"Print the previous line on string match","text":"

    This works by storing the previous line. If the current line matches the regex, the previous line is printed from the stored value.

    $ awk '/32 host/ { print previous_line } {previous_line=$0}' /proc/net/fib_trie | column -t | sort -u\n|--  10.134.243.137\n|--  127.0.0.1\n|--  169.50.9.172\n
    "},{"location":"awk/#add-content-to-line-1-if-there-is-no-match","title":"Add content to line 1 if there is no match","text":"

    This adds a yaml document separator to the beginning of all yaml files in the current directory only if it does not already have one.

    tempfile=$(mktemp)\nfor file in ./*.yaml ; do\n  awk 'NR == 1 && $0 != \"---\" {print \"---\"} {print}' \"${file}\" > \"${tempfile}\" \\\n  && mv \"${tempfile}\" \"${file}\"\ndone\n
    "},{"location":"awk/#show-all-docker-images-in-a-helm-chart-and-their-https-links","title":"Show all docker images in a helm chart and their https links","text":"
    helm template . --set global.baseDomain=foo.com -f /Users/danielh/a/google-environments/prod/cloud/app/config.yaml 2>/dev/null |\nawk '/image: / {match($2, /(([^\"]*):[^\"]*)/, a) ; printf \"https://%s %s\\n\", a[2], a[1] ;}' |\nsort -u |\ncolumn -t\n

    A less complicated awk form of this that uses other shell commands would be

    helm template . --set global.baseDomain=foo.com -f /Users/danielh/a/google-environments/prod/cloud/app/config.yaml 2>/dev/null |\ngrep 'image: ' |\nawk '{print $2}' |\nsed 's/\"//g' |\nsed 's/\\(\\(.*\\):.*\\)/https:\\/\\/\\2 \\1/' |\nsort -u |\ncolumn -t\n

    So it really depends on where you want to put your complications, how performant you want to be, and how readable you want it to be. These both produce identical output, but some people find it easier to read shorter commands with simpler syntaxes, which is great for maintainability when performance is not an issue.

    https://quay.io/astronomer/ap-alertmanager  quay.io/astronomer/ap-alertmanager:0.23.0\nhttps://quay.io/astronomer/ap-astro-ui      quay.io/astronomer/ap-astro-ui:0.25.4\nhttps://quay.io/astronomer/ap-base          quay.io/astronomer/ap-base:3.14.2\nhttps://quay.io/astronomer/ap-cli-install   quay.io/astronomer/ap-cli-install:0.25.2\n...snip...\n
    "},{"location":"awk/#show-a-list-of-dns-hostname-queries-with-domain-stripped-sorted-by-hostname-length","title":"Show a list of dns hostname queries with domain stripped, sorted by hostname length","text":"

    This samples 100k dns queries, strips off all the domain names in the queried hostname, and prints the length of that first component of the FQDN (the bare hostname) along with the bare hostname itself, and shows the longest 25 entries.

    tcpdump -c 100000 -l -n -e dst port 53 |\nawk '$14 == \"A?\" {gsub(/\\..*/, \"\", $15) ; print(length($15), $15) ; fflush(\"/dev/stdout\") ;}' |\nsort -u |\nsort -n |\ntail -n 25\n

    Run this on your kube-dns nodes to see how close you're getting to the 63 character limit. You will never see errors here though, because any name with a component longer than 63 characters is never sent over the wire, so you'll need to check your logs for those. A good string to search for is \"63 characters\".
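    A sketch of such a check over a list of names (the over-long name here is generated inline, not real traffic) flags first labels past the limit:

    ```shell
    # Print a warning for any first DNS label longer than 63 characters.
    # The 70-character test label is generated for the demo.
    awk 'BEGIN { s = ""; for (i = 0; i < 70; i++) s = s "a"; print s ".example.com"; print "short.example.com" }' |
    awk -F. 'length($1) > 63 { print "too long:", length($1) }'
    # → too long: 70
    ```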

    "},{"location":"awk/#see-also","title":"See Also","text":""},{"location":"awless/","title":"awless","text":"

    \"A Mighty CLI for AWS\" - https://github.com/wallix/awless

    "},{"location":"awless/#examples","title":"Examples","text":"

    A lot of these syntax examples can be found by issuing the command with a verb and entity but no parameters, such as awless create stack, which will drop you into a series of prompts to complete the necessary and optional parameters.

    "},{"location":"awless/#list-ec2-instances-sorted-by-uptime","title":"List ec2 instances sorted by uptime","text":"
    $ awless list instances --sort=uptime\n|         ID          |    ZONE    |           NAME          |  STATE  |    TYPE    | PUBLIC IP |   PRIVATE IP  | UPTIME \u25b2 | KEYPAIR |\n|---------------------|------------|-------------------------|---------|------------|-----------|---------------|----------|---------|\n| i-050ad501b33c6ad07 | us-west-1a | faruko-nal              | running | m4.xlarge  |           | 172.19.15.172 | 85 mins  | foo-ops |\n| i-5b381e9b          | us-west-1b | planted-collector11.foo | running | m4.xlarge  |           | 172.27.26.159 | 6 days   | foo-ops |\n| i-04ced9880586c009b | us-west-1a | hadoop07.foo            | running | m4.4xlarge |           | 172.27.37.100 | 8 days   | foo-ops |\n| i-0e583dcd3bc2444d8 | us-west-1a | db-na-historical06.foo  | running | m2.4xlarge |           | 172.19.48.79  | 12 days  | foo-ops |\n
    "},{"location":"awless/#sum-the-amount-of-unattached-disks-in-your-environment","title":"Sum the amount of unattached disks in your environment","text":"
    awless list volumes \\\n    --filter state=available \\\n    --format json |\n  jq .[].Size |\n  awk '{sum += $1 ; count += 1 ;} END {print sum \"G in \" count \" volumes\"}'\n
    "},{"location":"awless/#switch-to-a-different-aws-profile","title":"Switch to a different AWS profile","text":"

    This uses the ~/.aws/credentials file for its profiles

    Short way:

    awless switch prod\n

    Long way:

    awless config set aws.profile prod\n
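    The profiles referenced here live in a standard INI-style credentials file, roughly like this (profile names and key values are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = examplesecretkey

[prod]
aws_access_key_id = AKIAEXAMPLE2
aws_secret_access_key = examplesecretkey2
```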
    "},{"location":"awless/#customize-output-columns","title":"Customize output columns","text":"
    awless list instances --columns name,type,launched\n
    "},{"location":"awless/#add-a-user-to-a-group","title":"Add a user to a group","text":"
    awless \\\n  --aws-profile govcloud \\\n  --aws-region us-gov-west-1 \\\n  attach user \\\n  group=SystemAdministrators \\\n  name=SpaceGhost\n
    "},{"location":"awless/#create-an-access-key-for-a-user","title":"Create an access key for a user","text":"

    This creates an access key and saves it in ~/.aws/credentials

    awless \\\n  --aws-profile govcloud \\\n  --aws-region us-gov-west-1 \\\n  create accesskey \\\n  user=SpaceGhost \\\n  save=true\n
    "},{"location":"awless/#create-a-tag","title":"Create a tag","text":"
    awless create tag key=test_tag resource=i-9ba90158 value=true\n
    "},{"location":"awless/#delete-a-tag","title":"Delete a tag","text":"
    awless delete tag key=test_tag_dhoherd resource=i-9ba90158\n
    "},{"location":"awless/#create-an-instance","title":"Create an instance","text":"
    awless create instance \\\n  count=1 \\\n  image=ami-5ab82fa8 \\\n  keypair=ops \\\n  name=new-hostname \\\n  securitygroup=[sg-c4321fd1,sg-c4321cb0] \\\n  subnet=subnet-c4321c33 \\\n  type=t2.medium\n
    "},{"location":"awless/#see-also","title":"See also","text":""},{"location":"aws-cloudformation/","title":"Amazon AWS Cloudformation","text":"

    \"AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.\" - http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html

    "},{"location":"aws-cloudformation/#links","title":"Links","text":""},{"location":"aws-cloudformation/#examples","title":"Examples","text":""},{"location":"aws-cloudformation/#import-cloudformation-stack-entities-into-datasette","title":"Import cloudformation stack entities into Datasette","text":"
    aws cloudformation list-stack-resources --stack-name \"$STACK_NAME\" --region \"$REGION\"  |\njq -c '.[]' |\nsqlite-utils insert datasette.db stack -\n
    "},{"location":"awscli/","title":"Amazon awscli","text":"

    Official Amazon AWS command-line interface - https://aws.amazon.com/cli

    "},{"location":"awscli/#example-usage","title":"Example usage","text":""},{"location":"awscli/#show-subnets-for-a-particular-region-and-account","title":"Show subnets for a particular region and account","text":"
    aws --profile=dev --region=us-west-2 ec2 describe-subnets\n
    "},{"location":"awscli/#see-also","title":"See Also","text":""},{"location":"backups/","title":"Backups","text":"

    Notes about backing up data.

    "},{"location":"backups/#links","title":"Links","text":""},{"location":"badblocks/","title":"badblocks","text":"

    badblocks is a program to test storage devices for bad blocks. - https://wiki.archlinux.org/index.php/badblocks

    "},{"location":"badblocks/#examples","title":"Examples","text":""},{"location":"badblocks/#destroy-all-data-on-a-disk-while-logging-bad-blocks","title":"Destroy all data on a disk while logging bad blocks","text":"
    ## -v verbose output writes error info to stderr\n## -s show scan progress, including percent complete, time elapsed, and error count\n## -w destructive write test, vs -n (nondestructive read/write test)\n## -b 4096 byte blocks\n## -t random test pattern\n## -o output file containing list of bad blocks, which can be passed back to badblocks, fsck or mke2fs\nbadblocks -v -s -w -b 4096 -t random -o ~/sdc.txt /dev/sdc\n
    "},{"location":"badblocks/#see-also","title":"See also","text":""},{"location":"bash/","title":"GNU bash","text":"

    Bash is one of the most common mainstream unix shells.

    "},{"location":"bash/#tricks-and-usage","title":"Tricks and Usage","text":""},{"location":"bash/#navigating-on-the-command-line","title":"Navigating on the command line","text":"

    The following can be seen by running: stty -a

    "},{"location":"bash/#view-a-list-of-all-commands-etc","title":"View a list of all commands, etc..","text":""},{"location":"bash/#remove-leading-zeroes","title":"Remove leading zeroes","text":"

    This method converts the numbers from base-10 to base-10, which has the side effect of removing leading zeroes. You can also use this to convert from other base systems.

    for X in 00{1..20..2} ; do\n  echo \"$X = $(( 10#${X} ))\"\ndone\n

    Or use bc, a CLI calculator...

    for X in {1..50..5} ; do\n  Y=00${X}\n  echo \"${X} with zeroes is ${Y} and removed with bc is $(echo ${Y} | bc)\"\ndone ;\n
    "},{"location":"bash/#print-several-files-side-by-side","title":"Print several files side by side","text":"
    printf \"%s\\n\" {a..z} > alpha.txt\nprintf \"%s\\n\" {1..26} > num.txt\npr -w 10 -t -m alpha.txt num.txt\n

    The following output will be printed:

    a    1\nb    2\nc    3\nd    4\ne    5\nf    6\ng    7\nh    8\ni    9\nj    10\nk    11\nl    12\nm    13\nn    14\no    15\np    16\nq    17\nr    18\ns    19\nt    20\nu    21\nv    22\nw    23\nx    24\ny    25\nz    26\n
    "},{"location":"bash/#convert-base-36-to-decimal","title":"Convert base 36 to decimal","text":"

    This converts the base 36 number z to a decimal value

    echo $((36#z))\n
    "},{"location":"bash/#run-a-command-for-5-seconds-then-kill-it","title":"Run a command for 5 seconds, then kill it","text":"
    ping -f localhost & sleep 5 ; kill %1\n

    Alternatively, use the timeout command if it's available. On macOS this can be installed through brew install coreutils and accessed with gtimeout.

    timeout 300 cmd\n
    "},{"location":"bash/#test-if-a-variable-is-empty","title":"Test if a variable is empty","text":"
    if [[ -z \"$var\" ]]\n
    "},{"location":"bash/#date","title":"Date","text":"

    For date stuff, see date, because it differs by platform.

    "},{"location":"bash/#show-random-statistics","title":"Show RANDOM statistics","text":"
    for X in {0..9999} ; do\n  echo $(($RANDOM % 5)) ;\ndone |\nsort |\nuniq -c\n
    "},{"location":"bash/#named-pipes","title":"named pipes","text":"
    mkfifo baz ; ps aux > baz\n

    then, in another terminal

    cat baz\n
    "},{"location":"bash/#alternate-redirection-outputs","title":"alternate redirection outputs","text":"
    exec 3> /tmp/baz ; ps aux >&3 # sends the output of ps aux to /tmp/baz\n
    "},{"location":"bash/#redirect-all-output-of-a-script-into-a-file","title":"Redirect all output of a script into a file","text":"

    This is not bash specific, but works in bash.

    ##!/usr/bin/env bash\n\nexec >> /tmp/$0.log\nexec 2>&1\n\ndate \"+%F %T%z $0 This is stdout, and will be written to the log\"\ndate \"+%F %T%z $0 This is stderr, and will also be written to the log\"\n
    "},{"location":"bash/#show-size-of-each-users-home-folder","title":"Show size of each user's home folder","text":"
    getent passwd |\nwhile IFS=: read -r user _ uid _ _ home _ ; do\n  if [[ $uid -ge 500 ]] ; then\n    printf \"%s \" \"$user\" ;\n    sudo du -sh \"$home\" ;\n  fi ;\ndone\n
    "},{"location":"bash/#previous-commands-args","title":"Previous command's args","text":"
    mkdir temp ; cd !!:*\n

    Be aware of the location of the tokens. For example:

    mkdir -p {foo,bar}/{a,b,c}\nstat !!:*\n

    This creates a problem because stat has no -p option, so you must skip the first word with stat !!:2*

    "},{"location":"bash/#debug-a-script","title":"Debug a script","text":"

    This will show everything bash is executing

    bash -x scriptname.sh\n

    Or debug with a function:

    function debug {\n  if [ \"${debug:-0}\" -gt 0 ] ; then\n    echo \"$@\" >&2\n  fi\n}\n
    "},{"location":"bash/#debug-nested-scripts","title":"Debug nested scripts","text":"
    PS4=\"+(\\${BASH_SOURCE}:\\${LINENO}): \\${FUNCNAME[0]:+\\${FUNCNAME[0]}(): }\" bash -x some-command\n
    "},{"location":"bash/#find-where-all-the-inodes-are","title":"Find where all the inodes are","text":"
    find ~/ -type d -print0 |\nxargs -I %% -0 bash -c \"echo -n %% ; ls -a '%%' | wc -l\" >> ~/inodes.txt\n
    "},{"location":"bash/#build-and-print-an-array","title":"Build and print an array","text":"
    array=(\"one is the first element\");\narray+=(\"two is the second element\" \"three is the third\");\necho \"${array[@]}\"\n

    This is useful for building command line strings. For example, gpsbabel requires each input file to be prepended with -f. The following script takes a list of files and uses a bash array to create a command line in the form of gpsbabel -i gpx -f input_file_1.gpx -f input_file_2.gpx -o gpx -F output.gpx

    ##!/usr/bin/env bash\n\n## Check for at least one argument, print usage if fail\nif [ $# -lt 2 ] ; then\n    echo \"This script merges gpx files and requires at least two gpx files passed as arguments. Output is output.gpx\";\n    echo \"Usage:    $0 <gpx file> <gpx file> [...<gpx file>]\";\n    exit 1;\nfi\n\n## Create an array of arguments to pass to gpsbabel\nargs=();\nfor item in \"$@\" ; do\n    if [ -f \"$item\" ] || [ -h \"$item\" ] ; then\n        args+=( \"-f\" \"$item\" );\n    else\n        echo \"Skipping $item, it's not a file or symlink.\"\n    fi\ndone;\n\n## Verify we have at least two files to work with\nif [ \"${#args[@]}\" -lt 4 ] ; then\n    echo \"We don't have enough actual files to work with. Exiting.\"\n    exit 1\nfi\n\ngpsbabel -i gpx \"${args[@]}\" -o gpx -F output.gpx\n
    "},{"location":"bash/#build-and-print-an-associative-array-dict-hash","title":"Build and print an associative array (dict, hash)","text":"
    declare -A animals=(\n  [\"cow\"]=\"moo\"\n  [\"dog\"]=\"woof woof\"\n  [\"cat\"]=\"meow\"\n) ;\nfor animal in \"${!animals[@]}\" ; do\n  echo \"The $animal says '${animals[$animal]}'\" ;\ndone ;\n
    "},{"location":"bash/#show-permissions-in-rwx-and-octal-format","title":"Show permissions in rwx and octal format","text":"

    Linux:

    stat -c '%A %a %n' filename\n

    OSX:

    stat -f '%A %N' filename\n

    See stat for more stat usage.

    "},{"location":"bash/#find-the-length-of-a-variable","title":"Find the length of a variable","text":"
    echo ${#SHELL}\n
    "},{"location":"bash/#print-all-variables-that-start-with-the-substring-sh","title":"Print all variables that start with the substring SH","text":"
    echo \"${!SH@}\"\n
    "},{"location":"bash/#tertiary-type-variables","title":"Ternary type variables","text":"
    ${V:-empty} # means \"return the value of the variable V or the string 'empty' if $V isn't set.\"\n
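    The rest of this expansion family can be sketched together:

```shell
unset V
echo "${V:-fallback}"   # V is unset: prints "fallback" but leaves V unset
echo "${V:=assigned}"   # V is unset: prints "assigned" AND assigns it to V
echo "${V:+alternate}"  # V is now set and non-empty: prints "alternate"
```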
    "},{"location":"bash/#do-a-command-and-if-it-returns-false-so-some-more-stuff","title":"Do a command, and if it returns false, do some more stuff","text":"
    until command_that_will_fail ; do something_else ; done ;\n
    "},{"location":"bash/#print-two-digit-months","title":"Print two digit months","text":"

    echo {01..12} may not work in older bash versions. If not, use echo $(seq -w 1 12)
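    Both forms side by side (zero-padded brace expansion requires bash 4 or later):

```shell
echo {01..12}         # bash 4+: brace expansion preserves the zero padding
echo $(seq -w 1 12)   # portable: seq -w equalizes widths with leading zeroes
```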

    "},{"location":"bash/#get-filename-extension-or-path","title":"Get filename, extension or path","text":"

    Taken from http://mywiki.wooledge.org/BashFAQ/073

    "},{"location":"bash/#rename-files-to-a-sequence-and-change-their-extension-at-the-same-time","title":"Rename files to a sequence and change their extension at the same time","text":"
    ls | while read -r line ; do\n  stub=${line%.*} ;\n  (( i += 1 )) ;\n  mv \"${line}\" \"${i}-${stub}.txt3\" ;\ndone ;\n
    FullPath=/path/to/name4afile-00809.ext   # result:   #   /path/to/name4afile-00809.ext\nFilename=${FullPath##*/}                             #   name4afile-00809.ext\nPathPref=${FullPath%\"$Filename\"}                     #   /path/to/\nFileStub=${Filename%.*}                              #   name4afile-00809\nFileExt=${Filename#\"$FileStub\"}                      #   .ext\n
    "},{"location":"bash/#sort-a-line-by-spaces","title":"Sort a line by spaces","text":"
    s=( whiskey tango foxtrot );\nsorted=$(printf \"%s\\n\" \"${s[@]}\" | sort);\necho $sorted\n
    "},{"location":"bash/#calculate-the-difference-between-two-dates","title":"Calculate the difference between two dates","text":"
    echo $(( $(gdate +%s -d 20120203) - $(gdate +%s -d 20120115) ))\n
    "},{"location":"bash/#substring-replace-a-variable","title":"substring replace a variable","text":"

    This is not regex, just a simple string replacement.

    ## ${VAR/search/replace} does only the first\n## ${VAR//search/replace} does all replacements\necho \"Paths in your path: ${PATH//:/ }\"\n
    "},{"location":"bash/#subtract-two-from-a-mac-address","title":"Subtract two from a MAC address","text":"
    ## printf -v defines a variable instead of printing to stdout\nprintf -v dec \"%d\" 0x$(echo 00:25:9c:52:1c:2a | sed 's/://g') ;\nlet dec=${dec}-2 ;\nprintf \"%012X\" ${dec} \\\n| sed -E 's/(..)(..)(..)(..)(..)(..)/\\1:\\2:\\3:\\4:\\5:\\6/g'\n
    "},{"location":"bash/#print-the-last-for-chars-of-a-variable","title":"Print the last four chars of a variable","text":""},{"location":"bash/#dereference-a-variable","title":"Dereference a variable","text":"
    $ for var in ${!BASH_V*} ; do echo \"${var}: ${!var}\" ; done ;\nBASH_VERSINFO: 5\nBASH_VERSION: 5.0.7(1)-release\n
    "},{"location":"bash/#print-something-else-if-a-variable-doesnt-exist","title":"Print something else if a variable doesn't exist","text":"

    This can even be recursively done...
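    A sketch of the nested form:

```shell
unset FOO BAR
echo "${FOO:-${BAR:-default}}"   # neither is set: prints "default"
BAR="bar value"
echo "${FOO:-${BAR:-default}}"   # falls through to BAR: prints "bar value"
```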

    "},{"location":"bash/#print-every-third-number-starting-with-1-and-ending-with-30","title":"Print every third number starting with 1 and ending with 30","text":"

    echo {1..30..3}

    "},{"location":"bash/#print-every-5th-letter-of-the-alphabet","title":"Print every 5th letter of the alphabet","text":"

    echo {a..z..5}

    "},{"location":"bash/#process-all-lines-but-print-out-status-about-what-line-we-are-on-every-nth-line","title":"Process all lines, but print out status about what line we are on every Nth line","text":"

    Sometimes during a series of long-running jobs you want to see the status of where you are, or at least some indicator that things have not paused. When ctrl-t is not available (and even when it is), this pattern can help you monitor that things are still moving along.

    N=0\nfind \"/usr/bin\" -type f |\nwhile read -r X ; do\n  N=$((N + 1))\n  [[ \"$((N % 50))\" -eq 0 ]] && date \"+%F %T file number $N $X\" >&2\n  shasum -a 512 \"${X}\" >> ~/usr_bin_shasums.txt\ndone\n

    Example terminal output from the above command, while all shasum output goes into ~/usr_bin_shasums.txt:

    $ find \"/usr/bin\" -type f |\n> while read -r X ; do\n>   N=$((N + 1))\n>   [[ \"$((N % 50))\" -eq 0 ]] && date \"+%F %T file number $N $X\" >&2\n>   shasum -a 512 \"${X}\" >> ~/usr_bin_shasums.txt\n> done\n2018-02-24 15:30:29 file number 50 /usr/bin/toe\n2018-02-24 15:30:30 file number 100 /usr/bin/db_hotbackup\n2018-02-24 15:30:32 file number 150 /usr/bin/host\n2018-02-24 15:30:33 file number 200 /usr/bin/groffer\n2018-02-24 15:30:35 file number 250 /usr/bin/mail\n2018-02-24 15:30:36 file number 300 /usr/bin/dbicadmin\n2018-02-24 15:30:38 file number 350 /usr/bin/fwkpfv\n2018-02-24 15:30:39 file number 400 /usr/bin/tab2space\n
    "},{"location":"bash/#make-a-directory-structure-of-every-combination-of-adjectivenoun","title":"Make a directory structure of every combination of /adjective/noun","text":"

    mkdir -p {red,green,blue}/{fish,bird,flower}

    "},{"location":"bash/#generate-a-zero-padded-random-2-byte-hex-number","title":"Generate a zero padded random 2 byte hex number","text":"

    printf \"%02X\\n\" $((RANDOM % 256))

    "},{"location":"bash/#grep-many-log-files-and-sort-output-by-date","title":"grep many log files and sort output by date","text":"
    sudo grep cron /var/log/* |\nsed 's/:/ /' |\nwhile read file month day hour line ; do\n  date -d \"$month $day $hour\" \"+%F %T%z ${file} ${line}\" ;\ndone |\nsort\n
    "},{"location":"bash/#get-command-line-switches","title":"Get command line switches","text":"

    From the docs

    while getopts p:l:t: opt; do\n  case $opt in\n    p) pages=$OPTARG ;;\n    l) length=$OPTARG ;;\n    t) time=$OPTARG ;;\n  esac\ndone\n\nshift $((OPTIND - 1))\necho \"pages is ${pages}\"\necho \"length is ${length}\"\necho \"time is ${time}\"\necho \"\\$1 is $1\"\necho \"\\$2 is $2\"\n

    Call this script as ./foo.sh -p \"this is p\" -l llll -t this\\ is\\ t foo bar

    "},{"location":"bash/#files","title":"Files","text":"

    These files can change the behavior of bash.

    "},{"location":"bash/#bash_profile","title":".bash_profile","text":"

    ~/.bash_profile is executed every time you log into the system or initiate a login shell. It is safe to include commands that write to stdout here.

    If you want to write scripts that change your interactive shell environment, such as changing your CWD, define functions here instead of using stand-alone scripts.
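    For example, a minimal function (the name mkcd is illustrative) that creates a directory and moves into it, which a stand-alone script cannot do for the calling shell:

```shell
# Create a directory (and any parents) and cd into it in the current shell.
mkcd() {
  mkdir -p "$1" && cd "$1"
}
```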

    "},{"location":"bash/#example-bash_profile","title":"Example .bash_profile","text":"

    The ~/.bash_profile file can be quite long and complicated. The following example is an incomplete sample:

    export EDITOR=/usr/bin/vim\nexport GZIP='-9'\nexport HISTSIZE=5000\nexport HISTTIMEFORMAT='%F %T%z '\nexport PS1=\"\\u@\\h:\\w$ \"\nexport TERM=xterm-256color\nexport TMOUT=\"1800\"  # log out after this many seconds of shell inactivity\n\nalias ll='ls -la'\nalias temp='date_F=$(date +%F) ; mkdir -p ~/temp/$date_F 2>/dev/null ; cd ~/temp/$date_F'\n\nsprunge() { curl -F 'sprunge=<-' http://sprunge.us < \"${1:-/dev/stdin}\"; } # usage: sprunge FILE # or some_command | sprunge\n\n## Don't record some commands\nexport HISTIGNORE=\"&:[ ]*:exit:ls:bg:fg:history:clear\"\n\n## Avoid duplicate entries\nHISTCONTROL=\"erasedups:ignoreboth\"\n\n## Perform file completion in a case insensitive fashion\nbind \"set completion-ignore-case on\"\n
    "},{"location":"bash/#bashrc","title":".bashrc","text":"

    ~/.bashrc is executed every time you open a sub-shell. It should not output any text, otherwise certain things (eg: scp) will fail.
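    One common safeguard (a sketch) is to stop sourcing early when the shell is not interactive, so no output can leak to scp and friends:

```shell
# At the top of ~/.bashrc: bail out unless this is an interactive shell.
case $- in
  *i*) ;;        # interactive: keep going
  *) return ;;   # non-interactive (scp, rsync, etc.): stop sourcing here
esac
```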

    "},{"location":"bash/#inputrc","title":"~/.inputrc","text":"

    This file defines some bash behaviors. It also affects some other tools.

    ## Ignore case while completing\nset completion-ignore-case on\n
    "},{"location":"bash/#links","title":"Links","text":""},{"location":"bbcp/","title":"bbcp","text":"

    \"Securely and quickly copy data from source to target.\" - https://www.slac.stanford.edu/~abh/bbcp/

    This is a useful tool for copying files. Notably, it gets around some bandwidth limitations of nc that I ran into when trying to copy one large file across an 80 Gbps network.

    "},{"location":"bc/","title":"GNU bc","text":"

    bc is a tool that does math on the CLI.

    "},{"location":"bc/#examples","title":"Examples","text":""},{"location":"bc/#divide-one-number-into-another-and-show-two-decimal-places","title":"Divide one number into another and show two decimal places","text":"

    The scale variable sets the number of digits kept after the decimal point.

    echo \"scale=2 ; 7 / 3\" | bc

    "},{"location":"bc/#convert-decimal-to-hexadecimal","title":"Convert decimal to hexadecimal","text":"

    echo \"obase=16 ; 10\" | bc

    "},{"location":"bc/#convert-hexadecimal-to-binary","title":"Convert hexadecimal to binary","text":"

    echo \"ibase=16 ; obase=2 ; AF\" | bc

    "},{"location":"bc/#subtract-two-from-the-last-octet-of-a-mac-address","title":"Subtract two from the last octet of a MAC address","text":"
    echo 24:b6:fd:ff:ba:31 |\nwhile read -r X ; do\n  echo ${X%??}$(\n    echo \"obase=16 ; $(( 0x${X#*:??:??:??:??:} )) - 2\" |\n      bc |\n      sed 's/^\\(.\\)$/0\\1/' |\n      tr A-Z a-z\n  ) ;\ndone ;\n
    "},{"location":"bind/","title":"BIND","text":"

    BIND, or named, is the most widely used Domain Name System (DNS) software on the Internet.

    "},{"location":"bind/#flush-records","title":"Flush records","text":""},{"location":"bind/#flush-a-single-record","title":"Flush a single record","text":"
    rndc flushname github.com\n
    "},{"location":"bind/#flush-all-records","title":"Flush all records","text":"
    rndc flush\n
    "},{"location":"blkid/","title":"blkid","text":"

    \"The blkid program is the command-line interface to working with the libblkid(3) library. It can determine the type of content (e.g. filesystem or swap) that a block device holds, and also attributes (tokens, NAME=value pairs) from the content metadata (e.g. LABEL or UUID fields). blkid has two main forms of operation: either searching for a device with a specific NAME=value pair, or displaying NAME=value pairs for one or more specified devices.\" - man blkid

    "},{"location":"blkid/#examples","title":"Examples","text":""},{"location":"blkid/#simple-usage","title":"Simple usage","text":"

    Here is the output of blkid on an Ubuntu 16.04 Vagrant box:

    $ blkid\n/dev/sda1: LABEL=\"cloudimg-rootfs\" UUID=\"743b1402-d445-494c-af0b-749040bb33e4\" TYPE=\"ext4\" PARTUUID=\"95a4c157-01\"\n/dev/sdb: UUID=\"2017-12-12-14-38-00-00\" LABEL=\"cidata\" TYPE=\"iso9660\"\n
    "},{"location":"blkid/#see-also","title":"See Also","text":""},{"location":"bluetooth/","title":"bluetooth","text":""},{"location":"bluetooth/#examples","title":"Examples","text":""},{"location":"bluetooth/#linux-software","title":"Linux software","text":""},{"location":"bpf/","title":"bpf","text":"

    \"Linux Socket Filtering (LSF) is derived from the Berkeley Packet Filter. Though there are some distinct differences between the BSD and Linux Kernel filtering, but when we speak of BPF or LSF in Linux context, we mean the very same mechanism of filtering in the Linux kernel.\"

    "},{"location":"c/","title":"C","text":"

    \"C (pronounced like the letter c) is a general-purpose computer programming language. It was created in the 1970s by Dennis Ritchie, and remains very widely used and influential.\" - https://en.wikipedia.org/wiki/C_(programming_language)

    The Linux kernel is more than 98% C code.

    "},{"location":"c/#links","title":"Links","text":""},{"location":"calico/","title":"calico","text":"

    \"Calico provides secure network connectivity for containers and virtual machine workloads.\" - https://docs.projectcalico.org/v3.1/introduction/

    "},{"location":"calico/#kubernetes-examples","title":"Kubernetes Examples","text":"

    Calico works in several environments, but these examples all apply to Kubernetes.

    "},{"location":"calico/#installation","title":"Installation","text":"

    https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/

    "},{"location":"calico/#show-a-bunch-of-info-about-your-calico-config","title":"Show a bunch of info about your calico config","text":"

    See also https://docs.projectcalico.org/v2.0/reference/calicoctl/resources/

    for X in bgpPeer hostEndpoint ipPool node policy profile workloadEndpoint ; do\n  echo \"=========== $X\"\n  calicoctl get $X 2>/dev/null\ndone\n
    "},{"location":"calico/#links","title":"Links","text":""},{"location":"calver/","title":"CalVer","text":"

    \"CalVer is a software versioning convention that is based on your project's release calendar, instead of arbitrary numbers.\" - https://calver.org/

    "},{"location":"calver/#links","title":"Links","text":""},{"location":"centos/","title":"CentOS Linux","text":"

    \"The CentOS Project is a community-driven free software effort focused on delivering a robust open source ecosystem.\" - https://www.centos.org/

    "},{"location":"centos/#centos-7","title":"CentOS 7","text":""},{"location":"centos/#new-things-in-centos-7","title":"New things in CentOS 7","text":""},{"location":"centos/#initial-setup","title":"Initial setup","text":"

    Set up some base parameters on a fresh instance

    yum install -y bash-completion bc curl git lsof mlocate mutt net-snmp ntp ntpdate smartmontools strace sysstat vim wget\nln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime\nntpdate {0..3}.pool.ntp.org\nsystemctl start ntpd\n
    "},{"location":"centos/#centos-6","title":"CentOS 6","text":""},{"location":"centos/#centos-6-initial-setup","title":"CentOS 6 Initial Setup","text":"
    yum install -y ntp\nchkconfig --levels 345 ntpd on && ntpdate time.apple.com && service ntpd start\nyum upgrade -y\nyum install -y arping avahi avahi-tools bc bind-utils curl elinks fping lsof net-snmp man mlocate mutt openssh openssh-clients openssh-server perl-Crypt-SSLeay perl-libwww-perl rsync strace vim wget yum-cron\nln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime\nchkconfig --levels 345 yum-cron on && service yum-cron start\nyum install -y dcfldd nfs-utils smartmontools dmidecode lshw dstat htop iotop\nchkconfig --levels 345 smartd on && service smartd start\n
    "},{"location":"centos/#tweaks-and-tricks","title":"Tweaks and Tricks","text":""},{"location":"centos/#get-past-protected-lib-problems","title":"Get past protected lib problems","text":"

    yum update --setopt=protected_multilib=false --skip-broken

    "},{"location":"centos/#enable-dhcp-hostname-for-dns-resolution","title":"Enable DHCP Hostname for DNS resolution","text":"

    Add \"DHCP_HOSTNAME=whatever\" to /etc/sysconfig/network-scripts/ifcfg-eth0

    "},{"location":"centos/#install-os-from-usb","title":"Install OS from USB","text":""},{"location":"centos/#show-installed-repository-keys","title":"Show installed repository keys","text":"

    rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\\n'

    "},{"location":"centos/#dhcp-with-ddns-hostname","title":"DHCP with DDNS hostname","text":"

    Model your /etc/sysconfig/network-scripts/ifcfg-eth0 like this:

    TYPE=Ethernet\nDEVICE=eth0\nONBOOT=yes\nBOOTPROTO=dhcp\n## Without the following line, dhclient will not update /etc/resolv.conf and may not get an IP address at all\nDHCP_HOSTNAME=some_hostname\n