- How do I tell my system to tell me about my system: OS, Kernel, Hardware, etc
- Resolve Permission Issues When Using Redirection
- Reload `bash` or `zsh` `.profile` without restarting shell
- Clear the contents of a file without deleting the file
- List all directories - not files, just directories
- Pitfalls of parsing `ls`
- Sequential shell command execution
- Get a date-time stamp for a log
- String manipulation with bash
- Testing things in bash
- The Shell Parameters of bash
- Assign shell command output to a variable in `bash`; a.k.a. command substitution
- Know the Difference Between `NULL` and an Empty String
- How do I see my environment?
- Shell variables: UPPER case, lower case, or SoMeThInG_eLsE...?
- What do file and directory permissions mean?
- Using `which` to find commands - accurately!
- Using your shell command history
- Searching command history
- Access compressed log files easily
- Filename expansion; a.k.a. "globbing"
- Using the default editor `nano` effectively
- Some Options with `grep`
- Filtering `grep` processes from `grep` output
- Finding pattern matches: `grep` or `awk`?
- What version of `awk` is available on my Raspberry Pi?
- Find what you need in that huge `man` page
- Where did I put that file? - it's somewhere in my system
- A useful tool for GPIO hackers: `raspi-gpio`
- `raspi-config` from the command line?
- Background, nohup, infinite loops, daemons
- Bluetooth
- Change the modification date/time of a file
- How to deal with "Unix time" when using `date`
- Process management using ctrl+z, `fg`, `bg`, `&`, `jobs`
- Download a file from GitHub
- Verify a file system is mounted with `findmnt` - before trying to use it!
- How to "roll back" an `apt upgrade` (coming soon)
- Should I use `scp`, or `sftp`?
- So you want to remove `rpi-eeprom` package & save 25MB?
- How to move or copy a file without accidentally overwriting a destination file
- REFERENCES:
Stored in `/proc/cpuinfo`; for a single-CPU RPi:
$ cat /proc/cpuinfo
processor : 0
model name : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS : 697.95
Features : half thumb fastmult vfp edsp java tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xb76
CPU revision : 7
Hardware : BCM2835
Revision : 0010
Serial : 000000003e3ab978
Model : Raspberry Pi Model B Plus Rev 1.2
$ cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 3 (v7l)
BogoMIPS : 108.00
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3
... repeat for processor : 1, processor : 2, processor : 3
Hardware : BCM2711
Revision : b03111
Serial : 100000006cce8fc1
Model : Raspberry Pi 4 Model B Rev 1.1
$ cat /proc/cpuinfo
processor : 0
BogoMIPS : 108.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x4
CPU part : 0xd0b
CPU revision : 1
... repeat for processors 1, 2 & 3
Hardware : BCM2835
Revision : c04170
Serial : 6b71acd964ee2481
Model : Raspberry Pi 5 Model B Rev 1.0
$ cat /proc/cpuinfo | awk '/Model/'
Model : Raspberry Pi 4 Model B Rev 1.1
$ man uname # see options & other usage info
# RPi B+ (buster)
$ uname -a
Linux raspberrypi1bp 5.10.63+ #1496 Wed Dec 1 15:57:05 GMT 2021 armv6l GNU/Linux
# RPi 3B+ (bullseye)
$ uname -a
Linux raspberrypi3b 5.10.92-v7+ #1514 SMP Mon Jan 17 17:36:39 GMT 2022 armv7l GNU/Linux
# RPi 4B: (buster)
$ uname -a
Linux raspberrypi4b 5.10.63-v7l+ #1496 SMP Wed Dec 1 15:58:56 GMT 2021 armv7l GNU/Linux
This works on RPi OS, but may not work on distros that are not Debian derivatives. But if it works, it's useful:
$ man lsb_release # print distribution-specific info; see manual for options, usage
$ lsb_release -a
No LSB modules are available. # note that lsb itself may not be installed by default
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
$ hostnamectl # p/o systemd, see man hostnamectl for options & usage info
Static hostname: raspberrypi3b
Icon name: computer
Machine ID: be49a9402c954d689ba79ffd5f71ad67
Boot ID: 986ab27386444b52bddae1316c5e1ee1
Operating System: Raspbian GNU/Linux 11 (bullseye)
Kernel: Linux 5.10.92-v7+
Architecture: arm
ethtool --show-permaddr eth0 # for the Ethernet adapter
ethtool --show-permaddr wlan0 # for the WiFi adapter
The `vcgencmd` tool can report numerous details from the VideoCore GPU. See `man vcgencmd`, and the "official documentation" for details. For a list of all available commands under `vcgencmd`, do `vcgencmd commands`; a usage example follows the list:
- set_logging,
- bootloader_config,
- bootloader_version,
- cache_flush,
- codec_enabled,
- get_mem,
- get_rsts,
- measure_clock,
- measure_temp,
- measure_volts,
- get_hvs_asserts,
- get_config,
- get_throttled,
- pmicrd,
- pmicwr,
- read_ring_osc,
- version,
- readmr,
- otp_dump (which has its own special section in the docs),
- pmic_read_adc,
- power_monitor
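As a quick illustration, here's how a few of the sub-commands listed above are typically invoked; the output values shown are purely illustrative - they will differ with your hardware and firmware:

$ vcgencmd measure_temp       # SoC temperature
temp=47.2'C
$ vcgencmd get_throttled      # throttling / under-voltage status bits
throttled=0x0
$ vcgencmd measure_volts core # core voltage
volt=0.8563V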
Bluetooth info (maybe better off not knowing? see Bluetooth)
$ hciconfig -a
hci0: Type: Primary Bus: UART
BD Address: D8:3A:DD:A7:B2:00 ACL MTU: 1021:8 SCO MTU: 64:1
UP RUNNING
RX bytes:5014 acl:0 sco:0 events:438 errors:0
TX bytes:66795 acl:0 sco:0 commands:438 errors:0
Features: 0xbf 0xfe 0xcf 0xfe 0xdb 0xff 0x7b 0x87
Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
Link policy: RSWITCH SNIFF
Link mode: PERIPHERAL ACCEPT
Name: 'raspberrypi5'
Class: 0x000000
Service Classes: Unspecified
Device Class: Miscellaneous,
HCI Version: 5.0 (0x9) Revision: 0x17e
LMP Version: 5.0 (0x9) Subversion: 0x6119
Manufacturer: Cypress Semiconductor (305)
The redirection operators (`>` and `>>`) are incredibly useful tools in the shell. But, when they are used to redirect output from a command to a file requiring `root` privileges, they can leave a user scratching his head. Consider this example:
$ sudo printf "Houston, we have a problem!" > /etc/issue.net
-bash: /etc/issue.net: Permission denied
Most who encounter this for the first time are baffled... "WTF?! - why does this not work? I can open the file with an editor - I can edit and save... WTF?!"
The problem is obvious once it's explained, but the solutions may vary. The problem in the example above is that there are actually two commands being used: `printf`, which is propelled by `sudo`, and the redirect `>`, which is not propelled by `sudo`. And of course you don't actually need `sudo` to execute a `printf` command, but you do need `sudo` to write to `/etc/issue.net`. What to do? None of the answers are particularly elegant IMHO, but they do work:
- If you put the example command in a shell script, and run the script with `sudo`, you won't have a problem. This is due to the fact that every command in the script - including redirects - will run with `root` privileges. Another way to consider the issue is this: it's only an issue when using the command sequence from the shell prompt. Feel better?
- Similar to #1, you can spawn a new sub-shell using the `-c` option to process a command (ref `man sh`). This is best explained as follows:

  $ sudo sh -c 'printf "Houston, we have a problem!" > /etc/issue.net'
  # --OR--
  $ sudo bash -c 'echo "Houston, we have a problem!" > /etc/issue.net'

  You will find this succeeds when executed from a shell prompt.
- The final option (for this recipe at least) is to use the `tee` command instead of the redirect:

  $ printf "Houston, we have a problem!" | sudo tee /etc/issue.net
  # OR: If you don't want the output to print on your terminal:
  $ printf "Houston, we have a problem!" | sudo tee /etc/issue.net > /dev/null
If you're interested, this Q&A on SO has much more on this subject. ⋀
There are two user-owned files that control many aspects of the `bash` shell's behavior - uh, interactive shells, that is: `~/.profile` & `~/.bashrc`. Likewise for `zsh`, the `~/.zprofile` & `~/.zshrc`. There will be occasions when changes to these files need to be made in the current session - without exiting one shell session, and starting a new one. Examples of such changes are changes to the `PATH`, or addition of an `alias`.
$ source ~/.profile # use this for bash
$ source ~/.bashrc # "
% source ~/.zprofile # use this for zsh
% source ~/.zshrc # "
# OR ALTERNATIVELY:
$ . ~/.profile # use for bash + see Notes below
$ . ~/.bashrc # "
% . ~/.zprofile # use for zsh + see Notes below
% . ~/.zshrc # "
Note 1: The dot operator `.` is a synonym for `source`. Also, it's POSIX-compliant (`source` is not).
Note 2: Additions and removals from `~/.bashrc` behave differently: If something is removed from `~/.bashrc`, this change will not take effect after sourcing `~/.bashrc` (i.e. `. ~/.bashrc`).

For example: Add a function to `~/.bashrc`: `function externalip () { curl http://ipecho.net/plain; echo; }`. Now source it with `. ~/.profile`. You should see that the function now works in this session. Now remove the function, and then source it again using `. ~/.profile`. The function is still available - only restarting (log out & in), or starting a new shell session will remove it. ⋀
$ > somefile.xyz # works in bash
# -OR-
% : > $LOGFILE # works in zsh
# -OR-
$ truncate -s 0 test.txt # any system w/ truncate
# -OR-
$ cp /dev/null somefile.xyz # any system
$ find . -type d # list all dirs in pwd (.)
Note: In this context the 'dot' `.` means the `pwd` - not the dot operator as in the example above. ⋀
In some cases, you can get away with parsing and/or filtering the output of `ls`, and in other cases you cannot. I've spent an inordinate amount of time trying to filter the output of `ls` to get only hidden files - or only hidden directories. `ls` seems very squishy and unreliable in some instances when trying to get a specific, filtered list... ref Wooledge.
I try to keep discussion on the topics here brief, but don't always succeed. In a sincere effort to avoid verbosity here, I'll close with a few bulleted bits of guidance:

- `find` is more reliable than `ls` for tailored lists of files & folders; learn to use it - esp. in scripting (see the sketch following this list).
- Much has been written (e.g.) on using `bash` PATTERNs, and enabling various extensions through `shopt` to filter/parse `ls` output. AIUI, the "PATTERNs" are not regular expressions, but an extended type of file globbing. I've not found that reliably effective, but I've not put much time & effort into it.
- `ls` has a long form (the `-l` option); the default being the short form. I generally favor the long form (me - a data junkie?). To illustrate my squishy claim w.r.t. listing only hidden files, I've found these work:
  - For the short form: `ls -A | grep '^\.'`; note the caret `^` anchors the match to the beginning of the line
  - For the short form: `ls -d1 -- \.*`; an example of "glob patterns"
  - For the long form: `ls -Al | grep " \."`; note the space in the pattern; alternative: `\s`
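And here's a sketch of the `find`-based alternative mentioned above (GNU `find`, as shipped with RPi OS, is assumed):

$ find . -maxdepth 1 -type f -name '.*'               # only hidden files in the current directory
$ find . -maxdepth 1 -type d -name '.*' ! -name '.'   # only hidden directories in the current directory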
⋀
Sometimes we want to execute a series of commands, but only if all previous commands execute successfully. In this case, we should use &&
to join the commands in the sequence:
cd /home/auser && cp /utilities/backup_home.sh ./ && chown auser ./backup_home.sh
At other times we want to execute a series of commands regardless of whether or not previous commands executed successfully. In that case, we should use ;
to join the commands in the sequence:
cp /home/pi/README /home/auser; rsync -av /home/auser /mnt/BackupDrv/auser_backup/
It's often useful to insert a date-time stamp in a log file, inserted in a string, etc. Easily done w/ date
using command substitution:
echo $(date) >> mydatalog.txt # using `echo` inserts a newline after the date & time
# log entry will be in this format: Tue Mar 24 04:28:31 CDT 2020 + newline
echo $(date -u) # `-u` gives UTC
If you need more control over the format, use printf
w/ date
:
printf '%s' "$(date)" >> mydatalog.txt # no newline is output
# log entry will be in this format: Tue Mar 24 04:28:31 CDT 2020
printf '%s\n' "$(date)" >> mydatalog.txt # newline is output
There are numerous options with the date
command. Check man date
, or peruse this Lifewire article 'How to Display the Date and Time Using Linux Command Line' - it lists all of the format options for displaying the output of date
.
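If you prefer a fixed-width, sortable timestamp, a format string does the job; the format shown here is just one possibility:

printf '%s\n' "$(date '+%Y-%m-%d_%H:%M:%S')" >> mydatalog.txt
# log entry will be in this format: 2020-03-24_04:28:31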
⋀
It's often useful to manipulate string variables in bash. These websites have some examples: website 1; website 2. The Wooledge Wiki is a bit more advanced, and a trove of string manipulation methods available in bash
. Section 10.1 of the Advanced Bash-Scripting Guide is another comprehensive source of information on string manipulation. For example:
$ str1="for everything there is a "
$ str2="reason"
$ str3="season"
$ echo $str1$str2; echo $str1$str3
for everything there is a reason
for everything there is a season
Testing the equality of two strings is a common task in shell scripts. You'll need to watch your step as there are numerous ways to screw this up! Consider a few examples:
$ string1="Anchors aweigh"
$ string2="Anchors Aweigh"
$ if [[ $string1 == "Anchors aweigh" ]]; then echo "equal"; else echo "not equal"; fi
equal
$ if [ "$string1" == "Anchors aweigh" ]; then echo "equal"; else echo "not equal"; fi
equal
$ if [ "$string1" = "Anchors aweigh" ]; then echo "equal"; else echo "not equal"; fi
equal
# but if you forget something; e.g.
$ if [ $string1 == "Anchors aweigh" ]; then echo "equal"; else echo "not equal"; fi
-bash: [: too many arguments
not equal
# BOOM! no quotes "" - you crash and burn :)
$ [ "$string1" = "Anchors aweigh" ] && echo equal || echo not-equal
equal
$ [ "$string1" -eq "Anchors aweigh" ] && echo equal || echo not-equal
-bash: [: Anchors aweigh: integer expression expected
not-equal
# BOOM! `-eq` is for numbers, not strings - you crash and burn :)
$ [ "$string1" = "$string2" ] && echo equal || echo not-equal
not-equal
$ [[ $string1 = $string2 ]] && echo equal || echo not-equal
not-equal
$ [[ ${string1,,} = ${string2,,} ]] && echo equal || echo not-equal
equal
# NOTE! this case-conversion only works in bash v4 & above
So much arcanery here, and limited portability. Here is a list of references peculiar to this one small problem: SO Q&A 1, SO Q&A 2, Linuxize, UL Q&A 1, SO Q&A 3, SO Q&A 4. AFAIK there's no unabridged reference for string manipulation in bash
, but section '10.1. Manipulating Strings' of the 'Advanced Bash-Scripting Guide' comes reasonably close. And the Other Comparison Operators section from Chap 7 of Advanced Bash-Scripting Guide is not to be missed :)
Ever wonder why some `test`s use single brackets: `[ ]` & others use double brackets: `[[ ]]`? Here's a very succinct answer.
⋀
What about null strings?
At the risk of going overboard, we'll cover testing for null strings also. Know that these two test operators have a special meaning within a test construct for strings:
-z string is null, that is, has zero length.
-n string is not null.
Consider the possibility that a shell variable has not been defined; for example let's imagine a variable named CaptainsOrders
- a variable to which we, perhaps, failed to assign a value through oversight. Now let's further imagine that CaptainsOrders
is to be tested in an if-then-else
construct. If we were careless, we might create that construct as follows:
if [ "$CaptainsOrders" = "Anchors aweigh" ]
then
echo "We sail at dawn tomorrow"
else
echo "We remain in port"
fi
That might be unfortunate - that might get you Court-martialed, and it certainly would cause your superiors to question your competence as a bash
programmer! But having been enlightened by this tutorial, you would be prepared for someone's failure to enter a value for CaptainsOrders
:
$ [ -z "$CaptainsOrders" ] && echo 'Sound the alarm!!!' || echo "Proceed as planned"
Sound the alarm!!!
You might also learn something of the difference between single quotes ''
, and double quotes ""
.
⋀
Sometimes you need the output of a shell command to be persistent; assign it to a variable for use later. This is known as command substitution. Consider the case of a tmp file you've created. Here's how:
$ WORKFILE=$(mktemp /tmp/ssh_stats-XXXXX)
$ echo $WORKFILE
/tmp/ssh_stats-BiA5m
Within this session (or script) $WORKFILE
will contain the location of your tmp file. ref
⋀
`bash` has two types of parameters: positional parameters and special parameters. They are the odd-looking variables you may have seen in scripts, such as: `$0`, `$1`, `$@`, `$?`, etc. But they come in very handy, and you should learn to use them. The Bash Reference Manual isn't as informative as it could be, but there are better explanations available that include examples: positional parameters, special parameters.
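As a quick, hedged illustration of a few of these parameters, the small script below (a made-up name, `params-demo.sh`) simply prints what it receives; run it as `./params-demo.sh foo bar`:

#!/usr/bin/env bash
# params-demo.sh - print a few positional & special parameters
echo "script name (\$0)    : $0"
echo "first argument (\$1) : $1"
echo "all arguments (\$@)  : $@"
echo "argument count (\$#) : $#"
false                                # a command that always fails...
echo "last exit status (\$?): $?"    # ...so $? reports 1 here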
⋀
At least in bash
, a null string is an empty/zero-length string; in other words, there is no difference. In bash
, a string (e.g. my_string
) can be tested to determine if it is a null/empty string as follows:
#!/usr/bin/env bash
# some processing has taken place, and now...
if [ -z "$my_string" ]
then
echo "ERROR: NULL 'my_string'; script execution aborted" 1>&2
exit 1
else
echo "No error - march on"
fi
echo "Completed if-then-else test for shell variable my_string"
A couple of References: Examples from nixCraft, and a Q&A from Linux SE. ⋀
In most distros, both env
and printenv
output the environment in which the command is entered. In other words, the env
/printenv
output in sh
will be different than in zsh
and different in cron
, etc. And as is typical, the output may be piped
to another program, redirected to a file, etc, etc.
For special cases, `set` is a builtin that's rather complex (see the documentation). Used with no options, it lists the names and values of all shell variables, environment variables, and even functions - a huge amount of output.
% printenv
% # OR #
% env # to view in the terminal
% # OR #
% set | less # HUGE output! pipe to less to view in the pager
For purposes of this recipe, "shell variables" refers to variables that are local in scope; i.e. not "environment variables" (REF). I've always used upper-case characters & underscores when I need to create a "shell variable". Yes - I use the same convention for "shell variables" as is used for "environment variables". I adopted this convention years ago because I read somewhere that lower-case variable names could easily be confused with commands. This was very true for me, esp during early learning days. At any rate, it made sense at the time & I've stuck with it for many years.
But "change is the only constant" as the saying goes, and I wanted to verify that my adopted convention remains in bounds. I've been unable to find a standard (e.g. POSIX) that prescribes a convention for "shell variables", but I've found there are certainly differences in opinion. Here's a summary of my research:
- From Stack Overflow, this Q&A has some interesting opinions and discussion - well worth the read IMHO.
- From a HTG post: How to Work with Variables in Bash :
A variable name cannot start with a number, nor can it contain spaces. It can, however, start with an underscore. Apart from that, you can use any mix of upper- and lowercase alphanumeric characters.
- This NEWBEDEV post Correct Bash and shell script variable capitalization contains some good suggestions, including the use of "snake case" (all lowercase and underscores) for "shell variables". The post is well-written, and informative, but rather opinionated. The author refers to something called "internal shell variables" which isn't well-defined, but specifically recommends lower case/snake case for "shell variables". He also refers to a POSIX standards document, but it is soft on the upper vs. lower case conventions.
- This post Bash Variable Name Rules: Legal and Illegal is also quite opinionated, but without reference to anything except what the author refers to as "good practice":
The variable name must be in the upper case as it is considered good practice in bash scripting.
I've found no reliable reference or relevant standard that recommends against the use of the upper-case characters & underscore
convention. As I understand it the opinions favoring lower-case characters & underscore
are based on claims that "this convention avoids accidentally overriding environmental and internal variables"
. However, it is not possible to change an "environment variable" using the assignment operator =
REF 1, REF 2, REF 3. In addition, there's a rather straightforward method of testing a "shell variable" to ensure it is not an "environment variable" using the shell built-in set
command:
# For this example, suppose we consider using MAILCHECK as a shell variable:
$ set | grep MAILCHECK
MAILCHECK=60
# whoops! better try something else:
$ set | grep CHECK_MAIL
$
# OK - that will work
For now, I will remain skeptical that "snake case" or any other case-related convention holds a compelling advantage over others. Do let me know if you feel differently, or find a broadly used standard (e.g. POSIX).
To be clear: The term "shell variables" is ambiguous. I have adopted one definition here that fits my objectives for this recipe, but do not suggest that there is only one definition. In fact, the GNU Bash Reference Manual uses an entirely different definition for Shell Variables. ⋀
File permissions:
r = read the contents of the file
w = modify the file
x = run the file as an executable
Directory permissions:
r = list the contents of the directory, but not 'ls' into it
w = delete or add a file in the directory
x = move into the directory
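To tie these to what you see in a long listing, here's a hypothetical example (the names, sizes and dates are made up):

$ ls -ld somefile.txt somedir
-rw-r--r-- 1 pi pi 1234 Jan 17 01:38 somefile.txt   # owner: rw-, group: r--, others: r--
drwxr-xr-x 2 pi pi 4096 Jan 17 01:38 somedir        # owner: rwx, group: r-x, others: r-x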
⋀
For `zsh` users: You've installed a package - but where is it? The `which` command can help, but there are some things you need to know:

- `which` relies on a cache to provide its results; this cache may not be timely or current.
- To refresh the cache, run `rehash` or `hash -r`.
- There are subtle differences depending on your shell; `which` is a built-in for `zsh`, and a discrete command in `bash`.

In `bash`, `which` is a stand-alone command instead of a builtin. Consequently, `hash -r` is not needed to get timely results from `which`.
⋀
For those of us who don't have a photographic memory, our shell command history is very useful. Our primary objective in this brief segment is to gain some understanding of how the command history works. Once this is understood, the configuration of command history becomes more clear, and allows us to use it with greater effect - to tailor it for how we work.
The Figure below is intended to show the relationship between the two different mechanisms used by bash
for storing command history. The dashed lines and arrows show the "flow" between the "file history", and the "session histories":
- Each session maintains its own unique history; it contains only commands issued in that session.
- When a session is ended, or its history filled to capacity, its history "flows" into the file history.
- A session history is deleted when the session is closed.
- The file history is an aggregation of all of a user's session histories.
- The file history is a permanent record, typically stored in `~/.bash_history`.
- When a new session is launched, commands from the file history "flow in" to fill the session history.
- In summary, command histories "flow" in both directions between the file history & session histories.
There are numerous variables and commands (built-ins) that control the behavior of the command history, and there are numerous guides and recommendations on how to configure the command history. But you must understand how the command history works to make informed decisions about how to configure yours.
T. Laurenson's blog post on bash
history, and his command history configuration script are excellent IMHO. However, my command history configuration is different; I don't need (or want) my session histories merged immediately; I prefer they remain unique for the duration of that session. For me, this makes a command recall quicker and simpler as I tend to use different sessions for different tasks.
The semantics for configuring the bash
command history options are covered in some of the REFERENCES, and here in this section of the bash
manual. If you're just starting with the command history, there may be some benefit to a brief perusal to appreciate the scope of this component of bash
. If your objective is to gain some proficiency, your time will be well-spent in conducting some experiments to see for yourself how a basic set of variables and commands affect command history behavior.
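If you'd like a starting point for such experiments, here is one possible set of history-related settings for `~/.bashrc`; the variable names are standard `bash` history settings, but the values shown are purely illustrative, not recommendations:

HISTSIZE=2000               # max commands kept in a session history
HISTFILESIZE=10000          # max lines kept in ~/.bash_history
HISTCONTROL=ignoreboth      # drop duplicates & commands that start with a space
shopt -s histappend         # append to the history file rather than overwrite it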
What if you use a shell other than bash
? While some aspects of the command history are shell-dependent, they have more in common than they have differences. An overview of the command history - from a zsh
perspective - is provided in another section of this repo.
⋀
Paragraph 8.2.5 Searching for Commands in the History in the GNU Bash manual is probably the authoritative source for documentation of the search facility. But even they don't have all the tricks! We'll get to that in a moment, but I'd be remiss if I didn't take a moment to point out the value of reading the documentation... in this case, perhaps start with 8.2 Readline Interaction. You'll get more out of the effort.
Anyway - back to command history searching:
Many of you will already know that you can invoke a (reverse) search of your command history by typing control+r (^r
) at the bash
command prompt. You may also know that typing control+g (^g
) will gracefully terminate that search. GNU's Bash manual also points out that a forward search may also be conducted using control+s. But if you've tried the forward search, you may have found that it doesn't work! And so here's the trick alluded to above:
# add the following line(s) to ~/.bashrc:
# to support forward search (control-s)
stty -ixon
The reason control+s doesn't work is that it collides with XON
/XOFF
flow control (e.g. in Konsole). So the solutions are: 1.) bind the forward search to another key, or 2.) simply disable XON
/XOFF
flow control using stty -ixon
. And don't forget to source ~/.bashrc
to load it.
Finally, if you still have questions, I can recommend this blog post from Baeldung on the subject.
⋀
If you ever find yourself rummaging around in /var/log
... Maybe you're 'looking for something, but don't know exactly what'. In the /var/log
file listing, you'll see a sequence of syslog
files (and several others) arranged something like this:
-rw-r----- 1 root adm 3919 Jan 17 01:38 syslog
-rw-r----- 1 root adm 176587 Jan 17 00:00 syslog.1
-rw-r----- 1 root adm 11465 Jan 16 00:00 syslog.2.gz
-rw-r----- 1 root adm 19312 Jan 15 00:00 syslog.3.gz
-rw-r----- 1 root adm 4893 Jan 14 00:00 syslog.4.gz
-rw-r----- 1 root adm 5398 Jan 13 00:00 syslog.5.gz
-rw-r----- 1 root adm 4472 Jan 12 00:00 syslog.6.gz
-rw-r----- 1 root adm 4521 Jan 11 00:00 syslog.7.gz
The .gz
files are compressed with gzip
of course - but how to view the contents? There are some tools to make that job a little easier. zgrep
and zless
are the most useful in my experience, but zdiff
and zcat
are also there if you need them. Note that these "z
" utilities will also handle non-compressed files, but don't be tempted to use them as a substitute since not all options are available in the "z
" version. For example, grep -R
doesn't translate to zgrep
.
As a potentially useful example, consider listing all of the Under-voltage
warnings in /var/log/syslog*
. Note that the syslog*
filename expansion / globbing will get all the syslog files - compressed or uncompressed. Since there may be quite a few, piping them to the less
pager won't clutter your screen:
$ zgrep voltage /var/log/syslog* | less
If you want daily totals of Under-voltage
events, use the -c
option:
$ zgrep -c voltage /var/log/syslog*
/var/log/syslog:0
/var/log/syslog.1:8
/var/log/syslog.2.gz:3
/var/log/syslog.3.gz:5
/var/log/syslog.4.gz:4
/var/log/syslog.5.gz:10
/var/log/syslog.6.gz:0
/var/log/syslog.7.gz:0
Or weekly totals of Under-voltage
events:
$ zgrep -o voltage /var/log/syslog* | wc -l
30
Still more is possible if you care to pipe these results to awk
.
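Here's one possible `awk` follow-on - summing the per-file counts produced by `zgrep -c` (a sketch; it assumes the filenames themselves contain no colons, so the count is always the second colon-separated field):

$ zgrep -c voltage /var/log/syslog* | awk -F: '{ total += $2 } END { print "total:", total }'
total: 30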
⋀
The astute reader might have noticed the syntax from above:
$ zgrep -c voltage /var/log/syslog*
What does that asterisk (*
) mean; what does it do?
It's one of the more powerful idioms available in bash
, and extremely useful when working with files. Consider the alternatives to instructing bash
to loop through all the files with syslog
in their filename. Read more about its possibilities, and study the examples in the Advanced Bash-Scripting Guide.
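If you'd like to experiment, here are a few hedged examples using the syslog files shown earlier (the output depends, of course, on what's actually in your `/var/log`):

$ ls /var/log/syslog*        # '*' matches zero or more characters: syslog, syslog.1, syslog.2.gz, ...
$ ls /var/log/syslog.?       # '?' matches exactly one character: syslog.1 ... syslog.7
$ ls /var/log/syslog.[1-3]*  # a character class: syslog.1, syslog.2.gz, syslog.3.gz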
⋀
I was writing this section in conjunction with an answer in U&L SE. Why?... I felt the project documentation left something to be desired. A search turned up HTG's "Guide to Nano", and more recently this post: "Getting Started With Nano Text Editor". nano
doesn't change rapidly, but perhaps the timeless method for finding documentation on nano
is to find a descriptive term, and do your own search?
This is all well & good, but the sources above do not answer nano's burning question:
The help screen in nano (^G) lists many options, but using most of them requires one to know what key represents `M` - the "meta key"... what is the "meta key"?

- On macOS, `M` - the "meta key" - is the esc key
- On Linux & Windows(?), `M` - the "meta key" - is the Escape key
Another useful item is nano
's configuration file - ~/.nanorc
. Here's what I put in mine:
$ cat ~/.nanorc
set tabsize 4
set tabstospaces
`grep` has many variations which makes it useful in many situations. We can't cover them all here (not even close), but just to whet the appetite:

- `grep` can return TRUE/FALSE: `grep -q PATTERN [FILE]`; `0` if TRUE, `non-zero` if FALSE
- `grep` can return the matching object only: `grep -o PATTERN [FILE]` instead of the entire line
- you can `pipe` the output of a command to `grep`: `cat somefile.txt | grep 'Christmas'`
- `grep` can process a `Here String`: `grep PATTERN <<< "$VALUE"`, where `$VALUE` is expanded & fed to `grep`.
- `grep`'s `PATTERN` may be a literal string, or a regular expression; e.g. to find IPv4 ADDRESSES in a file:
sudo grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" /etc/network/interfaces
NOTE: This is not an exact match for an IP address, only an approximation, and may occasionally return something other than an IP address. An exact match is available here. ⋀
grep
provides a very useful filter in many situations. However, when filtering a list of processes using ps
, grep
introduces an annoying artifact: its output also includes the grep
process that is filtering the output of ps
. This is illustrated in the following example:
$ ps aux | grep cron
root 357 0.0 0.1 7976 2348 ? Ss 16:47 0:00 /usr/sbin/cron -f
pi 1246 0.0 0.0 7348 552 pts/0 S+ 18:46 0:00 grep --color=auto cron
Removal of this artifact can be accomplished in one of two ways:
$ #1: use grep -v grep to filter grep processes
$ ps aux | grep name_of_process | grep -v grep
root 357 0.0 0.1 7976 2348 ? Ss 16:47 0:00 /usr/sbin/cron -f
$ #2: use a regular expression instead of a string for grep's filter
$ ps aux | grep [n]ame_of_process
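A third alternative - not a `grep` option, but a different tool that sidesteps the artifact entirely - is `pgrep` (part of the `procps` package); it never matches its own process, and the output below is illustrative:

$ pgrep -a cron
357 /usr/sbin/cron -f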
While researching this piece, I came across this Q&A on Stack Overflow. As I read through the answers, I was surprised that some experienced users answered the question incorrectly! As I write this (Feb 2022), there are at least six (6) answers that are wrong - including one of the most highly voted answers. I can't guess why so many upvoted incorrect answers, but the question is clear: Match two strings in one line with grep?
; confirmed in the body of the question.
But the point here is not to chide for incorrect answers. The SO Q&A serves only to underscore the point that it pays to consider which tool (awk
or grep
in this case) is "best" for the job. "Best" is of course subjective, so here I attempt to illustrate the alternatives by example, and the reader may decide the best answer for himself. Before the example, let's review the mission statements of awk & grep from their man pages:
- `man grep`: print lines that match patterns; for details see GNU grep online manual
- `man awk`: pattern scanning and processing language; for details see GNU awk/gawk online manual
So - awk
's processing adds significant scope compared to that of grep
. But for the business of pattern matching, it is not necessary to bring all of that additional scope to bear on the problem. Let me explain: An awk
statement has two parts: a pattern, and an associated action. A key feature of awk
is that the action part of a statement may be omitted. This, because in the absence of an explicit action, awk
's default action is print
. Alternatively, awk
's basic function is to search text for patterns; this is grep
's only function.
Finally, know that AWK is a language, awk
is an implementation of that language, and there are several implementations available. Also know that there are far more implementations of grep - which is an acronym - explained here. There are also wide variations in the various grep implementations, as you may notice from reading the previously cited SO Q&A, and many other Q&A on grep usage.
Before beginning with the examples, I'll introduce the following file - used to verify the accuracy of the commands in the examples shown in the table below:
$ cat -n testsearch.txt
1 just a random collection on this line
2 string1 then some more words string2 #BOTH TARGETS
3 string2 blah blah blub blub noodella #ONLY TARGET2
4 #FROM 'Paradise Lost':
5 They, looking back, all the eastern side beheld
6 Of Paradise, so late their happy seat,
7 Waved over by that flaming brand; the gate
8 With dreadful faces thronged and fiery arms.
9 Some natural tears they dropped, but wiped them soon;
10 The world was all before them, where to choose
11 Their place of rest, and Providence their guide.
12 They, hand in hand, with wandering steps and slow,
13 Through Eden took their solitary way.
14 string1 this line contains only one #ONLY TARGET1
And here are the examples. All have been tested using the file testsearch.txt
, on Debian bullseye using GNU grep ver 3.6, and GNU Awk 5.1.0, API: 3.0 (GNU MPFR 4.1.0, GNU MP 6.2.1).
| `grep` | RES | `awk` | RES |
|---|---|---|---|
| Ex. 1: Print line(s) from the file/stream that contain string1 AND string2 | | | |
| Correct Output (RES) is Line #2 Only: "string1 then some more words string2 #BOTH TARGETS" | | | |
| `grep 'string1' testsearch.txt \| grep 'string2'` | Yes | `awk '/string1/ && /string2/' testsearch.txt` | Yes |
| `grep -P '(?=.*string1)(?=.*string2)' testsearch.txt` | Yes | | |
| `grep 'string1\\|string2' testsearch.txt` | No | | |
| `grep -E "string1\|string2" testsearch.txt` | No | | |
| `grep -e 'string1' -e 'string2' testsearch.txt` | No | | |
TO BE CONTINUED... ⋀
Know first that (mostly) because RPiOS is a Debian derivative, its default AWK is mawk
. mawk
has been characterized as having only basic features, and being very fast. This seems a reasonable compromise for the RPi; in particular the Zero, and the older RPis. But here's an odd thing: the release date for the mawk
package in buster
was 1996, but the release date for the mawk
package in bullseye
was in Jan, 2020. And so the version included in your system depends on the OS version; i.e. Debian 10/buster
, or Debian 11/bullseye
. You can get awk's version # & other details as follows:
$ cat /etc/debian_version
10.11
$ awk -W version
mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennan
...
$ cat /etc/debian_version
11.2
$ awk -W version
mawk 1.3.4 20200120
Copyright 2008-2019,2020, Thomas E. Dickey
Copyright 1991-1996,2014, Michael D. Brennan
...
- From the `buster` list of packages: `mawk (1.3.3-17+b3)`
- From the `bullseye` list of packages: `mawk (1.3.4.20200120-2)`
And of course, since gawk
has been in the RPiOS package repository for a while, installing that is also an option. The update-alternatives
utility can make the changes necessary to make gawk
the default for awk
. Once gawk
is declared the default for awk
, you can confirm that as follows:
$ awk -W version
GNU Awk 4.2.1, API: 2.0 (GNU MPFR 4.0.2, GNU MP 6.1.2)
Copyright (C) 1989, 1991-2018 Free Software Foundation.
...
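If you're wondering how to make that switch, the selection is typically handled by the alternatives system on Debian-based distros; a sketch, assuming `gawk` is installed from the repo:

$ sudo apt install gawk
$ sudo update-alternatives --config awk
# choose the /usr/bin/gawk entry from the menu presented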
Know that version 5 of gawk
is available in bullseye
's package repo, but the buster
repo is limited to version 4. ICYI, the LWN article mentioned in the References goes into some detail on the feature differences between gawk
ver 4 & ver 5.
⋀
Sometimes, finesse is over-rated. Sometimes things get misplaced, and you need to find it quickly. A feeling of panic may be creeping upon you due to impending schedule deadlines - or whatever the reason. This might help if you can remember anything at all about the filename - or its contents:
# search `/etc` recursively for filenames containing lines beginning w/ string 'inform'
# in this example binary files are excluded from the search by use of the 'I' option
# piping to pager 'less' avoids clutter in your terminal
$ sudo grep -rlI '/etc' -e '^inform' | less -N
/etc/dhcpcd.conf
$
Other times, the file you need to find is binary, or maybe you don't recall any of its contents, but you do recall part of the filename. In this situation, find
may be the right tool. Keep in mind that recursion is "free" when using find
, but you can limit the depth of the recursion. See man find
for the details; this may get you started:
$ find /some/path -name '*part-of-a-filename*'
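A couple of variations on that theme - `-iname` for a case-insensitive match, and `-maxdepth` to limit the recursion mentioned above:

$ find /some/path -maxdepth 2 -iname '*part-of-a-filename*'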
Not a shell trick exactly, but useful: Most systems use the pager named less
to display man
pages in a terminal. Most frequently, man
pages are consulted for a reference to a specific item of information - e.g. the meaning of an argument, or to find a particular section. less
gives one the ability to search for words, phrases or even single letters by simply entering /
from the keyboard, and then entering a search term. This search can be made much more effective with the addition of a regular expression or regex to define a pattern for the search. This is best explained by examples:
- find the list of `flags` in `man ps`:
  Entering `/flags` in `less` will get you there eventually, but you'll have to skip through several irrelevant matches. Knowing that the list of possible `flags` sits at the beginning of a line suggests a regex that matches `flags` at the start of a line, preceded only by whitespace. Calling upon our mastery of regex, the search expression should be anchored at the beginning of a line, followed by 0 or more spaces or tabs, and then our keyword `flags`; i.e.: `/^[ \t]*flags`
- find the syntax of the `case` statement in `bash`:
  Again, as we are looking to match a term at the beginning of a line, use the `^` anchor, followed by the whitespace character class repeated 1 or more times `[ \t]+`, followed by the search term `case`. In this search, we'll look for matches having whitespace after the regex also: `/^[ \t]+case[ \t]+`
⋀
`raspi-gpio` is a useful tool for those interested in working with external hardware. It's included as a standard package - even in the `Lite` distro, but was developed by an individual - i.e. outside "The Foundation". The raspi-gpio GitHub repo has some useful resources; there is no `man raspi-gpio`, but `raspi-gpio help` will get you the equivalent usage information. You can compare it against the `gpio` directive... ponder that for a moment :)
⋀
You've probably used the "graphical" (ncurses -based) version of raspi-config
that you start from the command line, and then navigate about to make configuration changes to your system. However, you may also use raspi-config
from the command line. This feature isn't well-documented (or well-understood), and even the GitHub repo for raspi-config
doesn't have anything to say about it - actually, it says nothing about everything :P This blog post seems to be the best source of information for now.
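One frequently cited form is the `nonint` (non-interactive) mode; treat the function name and argument convention below as assumptions - verify them against the `raspi-config` source on your system before relying on them:

$ sudo raspi-config nonint do_ssh 0   # reportedly enables the SSH server (0 = enable, 1 = disable)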
⋀
It's occasionally useful to create a program/script that runs continuously, performing some task. Background, nohup and infinite loops are all ingredients that allow us to create daemons - very useful actors for accomplishing many objectives. Here's a brief discussion of these ingredients, and a brief example showing how they work together:
- infinite loop: this is a set of instructions that run continuously by default; instructions that are executed repetitively until stopped or interrupted. In a literary sense, the infinite loop could be characterized as the daemon's beating heart.

  The infinite loop provides the framework for a task that should be performed continuously; for example a software thermostat that monitors your home temperature, and turns a fan ON or OFF depending upon the temperature. Infinite loops may be set up in a number of different ways; see the references below for details. Since our topic here is shell tricks, we'll use the most common `bash` implementation of an infinite loop - the `while` condition.

  Shown below is a complete, functional program (though not quite useful) that will be daemon-ized in the sequel, illustrating the simplicity of this recipe. You may copy and paste these few lines of code into a file on your RPi, save it as `mydaemond`, and make it executable (`chmod 755 mydaemond`):

      #!/usr/bin/env bash
      while :
      do
          echo "Hello, current UTC date & time: $(date -u)"
          sleep 60
      done

- background (`&`) and `nohup`: `nohup` is a "command invocation modifier", and the ampersand symbol `&` is a "control operator" in `bash`. These obscure, but powerful instructions are covered in the GNU documentation for `bash` (`&`), and the GNU documentation for core utilities - `coreutils` (`nohup`). Used together, they can daemon-ize our simple script: `&` will free your terminal/shell session for other activities by causing the script to run in the background, and `nohup` allows it to continue running after your terminal or shell session is ended. To paraphrase Dr. Frankenstein, "It's alive!".

  Let's play Dr. Frankenstein for a moment, and bring `mydaemond` to life from our terminal:

      $ chmod 755 mydaemond
      $ nohup ./mydaemond &
      [1] 14530
      $ nohup: ignoring input and appending output to 'nohup.out'
      $ logout

  Note the output: `[1] 14530`; this line informs us primarily that the process id (PID) of `mydaemond` is `14530`. The next line tells us what we already knew from reading the `nohup` documentation - or `man nohup`: the default case is to redirect all stdout to the file `nohup.out`.

  The `logout` command ends this terminal session: the interactive shell from which `mydaemond` was launched - the parent process of `mydaemond` - no longer exists. Linux doesn't normally orphan processes, and as of now `mydaemond` has been adopted; its new parent process has PID `1`. In Linux, PID 1 is reserved for `init` - a generic name for what is now called `systemd`. mind blown

  This can all be confirmed by launching a new terminal/login/SSH connection. Once you've got a new terminal up, there are at least two ways to confirm that `mydaemond` is still "alive":

  - Monitor `nohup.out` using `tail -f` as shown below:

        $ tail -f nohup.out
        Hello, current UTC date & time: Tue 15 Mar 01:13:52 UTC 2022
        Hello, current UTC date & time: Tue 15 Mar 01:14:52 UTC 2022
        # ... etc, etc

  - Ask `ps` for a report:

        $ ps -eo pid,ppid,state,start,user,tpgid,tty,cmd | grep "^14530"
        14530     1 S  01:13:51 pi          -1 ?        /bin/sh ./mydaemond

  `ps` is the more informative method. This may look complicated, but it's not. We've eschewed the old BSD syntax for the standard syntax. The `-o` option (`man ps`, `OUTPUT FORMAT CONTROL` section) allows one to create a customized report using keywords defined in the `STANDARD FORMAT SPECIFIERS` section. Note the `ppid` (parent PID) is `1`, corresponding to `systemd`'s PID; the `start` time is `01:13:51`; the `user` name is `pi`; a `tpgid` of `-1` means not attached to a TTY, same as `tty` = `?`; and finally the issuing command `cmd` is `./mydaemond`. All matching with actual history. For another view, try the command `pstree -pua`; the tree shows `mydaemond` as a branch from the `systemd` trunk.

- OK, but how do I stop `mydaemond`?: `mydaemond` has been instructional, but it has now served its purpose. To free up the resources it is now consuming, we must `kill` the process, and remove the contents of the `nohup.out` file:

      $ kill 14530
      $ # confirm kill
      $ ps -e | grep ^14530
      $ # 'rm nohup.out' to remove the file; alternatively, empty the file without removing it
      $ > nohup.out # empty the file
And that's it. ⋀
Having Bluetooth Issues? If you spend a week or so chasing Bluetooth problems on a Linux system, you begin to wonder: "Does Bluetooth on Linux just suck?" Unfortunately, I think the answer may be, "Yes, it does suck... at least on the Raspberry Pi Lite systems." I finally got fed up, and took the problem to the Raspberry Pi GitHub sites:
- First: in the RPi-Distro repo, where I was told this was a "Documentation issue", and should be filed in the Documentation repo.
- Second: in the Documentation repo, where I was told it was not a Documentation issue - it was a software (RPi-Distro) issue!
IOW - I got the run-around! And it gets worse: Apparently I have been banned from posting in the RPi-Distro repo for life! You see "The Organization" at its worst in these exchanges.
However: I have made some progress - see the recipes that begin with the word 'Bluetooth'. And I'm happy to say that most of the Bluetooth issues have been resolved! There are currently three (3) recipes dealing with Bluetooth audio for RPi Lite systems:
- Raspberry Pi Zero 2W; 'bookworm' Lite OS: This recipe focuses on a `pipewire`-based solution, where the `pipewire` installation was taken from Debian's bookworm backports tree - instructions are in the recipe. This installation has since been returned to the `stable` tree, where it is running `pipewire ver 1.2.4`. It has been extremely reliable, although I did run into a hitch removing it from the `backports` tree.
- Raspberry Pi 3A+; 'bookworm' Lite OS: This is a slightly older recipe, but remains valid. It began with an installation of the `bluez-alsa` repo, and then moved on to `pipewire`. The `pipewire` installation was from Debian's stable tree; it began with version `0.3.65`, and was later upgraded via `apt` to version `1.2.4`. And so this installation wound up at the same place as the Zero 2W installation. This recipe also contains instructions for setting up a modified `systemd` [email protected]; I feel this is a worthwhile modification.
- Raspberry Pi 3A+ "Bluetooth Hardware Upgrade": I decided to try a "Bluetooth hardware upgrade"; i.e. a Bluetooth "USB dongle" to replace the built-in Raspberry Pi Bluetooth hardware. It's relatively inexpensive, it's easily configured, and it has worked extremely well in my RPi 3A+ bookworm system with `pipewire`.
I suppose I would be remiss if I failed to point out the value of persistence in reaching this point of Bluetooth Bliss with my Lite systems.
From time-to-time, we all need to make adjustments to the modification date/time of a file(s) on our system. Here's an easy way to do that:
$ touch -d "2 hours ago" <filename>
# "2 hours ago" means two hours before the present time
# and of course you can use seconds/minutes/days, etc
If OTOH, you want to change the modification time relative to the file's current modification time, you can do that as follows:
$ touch -d "$(date -R -r <filename>) - 2 hours" <filename>
# "- 2 hours" means two hours before the current modification time of the file
# Example: subtract 2 hours from the current mod time of file 'foo.txt':
$ touch -d "$(date -R -r foo.txt) - 2 hours" foo.txt
Those of you who have administration chores involving use of "Unix time" may appreciate this. This trick has been "hiding in plain sight" for quite a while now, but it can come in very handy when needed. In my case, I was dealing with the wakealarm
setting for a Real-Time Clock; I use it to turn one of my Raspberry Pi machines ON and OFF. The wakealarm
settings must be entered/written to sysfs
in Unix time format; i.e. seconds from the epoch. The problem was trying to figure out how many seconds will elapse from the time I halt
until I want to wake up 22 hours and 45 minutes later? Yes - I can multiply, but I'm also lazy :) How do I do this?
man date
tells us that the format key for Unix time is %s
:
%s seconds since 1970-01-01 00:00:00 UTC
So if I need to calculate the wakealarm
time for 10 hours from now, I can do that as follows (this one is fairly simple):
alarm=$(/usr/bin/date '+%s' -d "+ 10 hours")
...
$ echo $alarm
1723629537
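And for the 22 hours and 45 minutes example mentioned above, the same pattern applies; the sysfs path below is the usual location for an RTC's wakealarm, but verify it on your own system before using it:

alarm=$(/usr/bin/date '+%s' -d "+ 22 hours 45 minutes")
echo "$alarm" | sudo tee /sys/class/rtc/rtc0/wakealarm > /dev/null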
Now, suppose I want to make a log entry indicating what time wakealarm
is set for? Mmmm - not simple!
... But it can be done like so...
echo "$(date -d "@$alarm" +'%c')"
Wed 14 Aug 2024 09:58:57 UTC
So "the trick" is to precede the variable ($alarm
in this case) with the @
symbol! The documentation is hidden here!
Let's assume you have started a long-running job from the shell: fg-bg.sh
#!/usr/bin/env bash
# this script runs a continuous loop to provide a means to test Ctrl-Z, fg & bg
# my $0 is 'fg-bg.sh'
while :
do
echo "$(date): ... another 60 seconds have passed, and I am still running" >> /home/pi/fg-bg.log
sleep 60
done
Now let's start this script in our terminal:
$ ./fg-bg.sh
# This process is running in the *foreground*, and you have lost access to your terminal window (no prompt!)
Now, enter ctrlz from your keyboard & watch what happens:
$ ./fg-bg.sh
^Z
[1]+ Stopped ./fg-bg.sh
$
# Note the prompt has returned, and 'fg-bg.sh' has been stopped/halted/suspended - no longer running
Now, let's suppose we want to re-start fg-bg.sh
, but we want to run it in the background so it doesn't block our terminal:
$ ./fg-bg.sh
^Z
[1]+ Stopped ./fg-bg.sh
$ bg
[1]+ ./fg-bg.sh &
# Note that 'fg-bg.sh' has been re-started in the background (see the '&'),
# And the command prompt has been restored. Confirm that 'fg-bg.sh' is running:
$ jobs
[1]+ Running ./fg-bg.sh &
$
Now, let's suppose that we want to monitor the output of fg-bg.sh
; i.e. monitor /home/pi/fg-bg.log
to check some things; we know that we can use tail -f
to do that:
$ ./fg-bg.sh
^Z
[1]+ Stopped ./fg-bg.sh
$ bg
[1]+ ./fg-bg.sh &
$ jobs
[1]+ Running ./fg-bg.sh &
$ tail -f /home/pi/fg-bg.log
Tue 20 Aug 19:09:37 UTC 2024: ... another 60 seconds have passed, and I am still running
Tue 20 Aug 19:10:37 UTC 2024: ... another 60 seconds have passed, and I am still running
Tue 20 Aug 19:11:37 UTC 2024: ... another 60 seconds have passed, and I am still running
OMG - another process has taken our command prompt away! Not to worry; simply enter ctrlz from your keyboard again, and then run jobs
again:
$ ./fg-bg.sh
^Z
[1]+ Stopped ./fg-bg.sh
$ bg
[1]+ ./fg-bg.sh &
$ jobs
[1]+ Running ./fg-bg.sh &
$ tail -f /home/pi/fg-bg.log
Tue 20 Aug 19:09:37 UTC 2024: ... another 60 seconds have passed, and I am still running
^Z
[2]+ Stopped tail -f /home/pi/fg-bg.log
$ jobs
[1]- Running ./fg-bg.sh &
[2]+ Stopped tail -f /home/pi/fg-bg.log
$
Note that fg-bg.sh
continues to run in the background, and that tail -f /home/pi/fg-bg.log
has now been stopped - thus restoring our command prompt. So cool :)
So hopefully you can now see some uses for ctrlz, fg
, bg
and jobs
. But you may be wondering, "How do I stop/kill these processes when I'm through with them?" Before answering that question, note the jobs
output; the numbers [1]
and [2]
are job ids or job numbers. We can exercise control over these processes through their job id; e.g.:
$ jobs
[1]- Running ./fg-bg.sh &
[2]+ Stopped tail -f /home/pi/fg-bg.log
$ kill %1
pi@rpi3a:~ $ jobs
[1]- Terminated ./fg-bg.sh
[2]+ Stopped tail -f /home/pi/fg-bg.log
pi@rpi3a:~ $ kill %2
[2]+ Stopped tail -f /home/pi/fg-bg.log
pi@rpi3a:~ $ jobs
[2]+ Terminated tail -f /home/pi/fg-bg.log
pi@rpi3a:~ $ jobs
pi@rpi3a:~ $
# ALL GONE :)
One final note: You can also use the job id to control fg
and bg
; for example if you had suspended a job using ctrlz, put it in the background (using bg
) as we did above, you could also return it to the foreground using fg %job_id
.
OK - so not as easy as you might think - at least not for all pages/files. For example, I needed to update my pico Debug Probe with the latest firmware recently. The URL was given as follows:
https://github.com/raspberrypi/debugprobe/releases/tag/debugprobe-v2.0.1
There are several files listed on the page; I needed the one named debugprobe.uf2
, but after trying various iterations of curl
, wget
, git clone
, etc I was becoming frustrated. But here's what worked:
$ wget "https://github.com/raspberrypi/debugprobe/releases/download/debugprobe-v2.0.1/debugprobe.uf2?raw=True" -O /home/pi/debugprobe.uf2
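An equivalent using `curl` should also work (untested here; `-L` follows GitHub's redirects, and `-o` names the output file):

$ curl -L -o /home/pi/debugprobe.uf2 "https://github.com/raspberrypi/debugprobe/releases/download/debugprobe-v2.0.1/debugprobe.uf2"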
I've had the occasional problem with the /boot/firmware
vfat
filesystem somehow becoming un-mounted on my RPi 5. I've wondered if it has something to do with my use of an NVMe card (instead of SD), or the NVMe Base (adapter) I'm using. I have no clues at this point, but I have found a competent tool to help me troubleshoot the situation whenever it occurs: findmnt
. WRT documentation and usage explanations for findmnt
, I found three (3) very good "How-Tos":
- This post from Baeldung ranks as a model of clarity IMHO; the following are also quite good:
- How to Use the findmnt Command on Linux from 'How-To Geek', and
- findmnt Command Examples from the Linux Handbook.
As Baeldung explains, findmnt
is fairly subtle... it has a lot of capability that may not be apparent at first glance. All that said, my initial solution was a bash
script that uses findmnt
, and a cron
job:
The script:
#!/usr/bin/env bash
# My $0 is bfw-verify.sh; I am run from the root crontab
if findmnt /boot/firmware >/dev/null; then
# note we depend upon $? (exit status), so can discard output
echo "/boot/firmware mounted"
else
echo "/boot/firmware is NOT mounted"
# we correct the issue as follows:
mount /dev/nvme0n1p1 /boot/firmware
# can test $? for success & make log entry if desired
fi
The cron
job; run in the root crontab
:
0 */6 * * * /usr/local/sbin/bfw-verify.sh >> /home/pi/logs/bfw-verify.log 2>&1
This approach would seem to have wide applicability in numerous situations; for example: verifying that a NAS filesystem is mounted before running an rsync
job. However, it may fall short for trouble-shooting a mysterious un-mounting of the /boot/firmware
file system; the next script attempts to address that shortcoming.
Another feature of findmnt
that is better than using the simple script above in a cron
job is the --poll
option. --poll
causes findmnt
to continuously monitor changes in the /proc/self/mountinfo
file. Please don't ask me to explain what the /proc/self/mountinfo
file actually is - I cannot explain it :) However, you may trust that when findmnt --poll
uses it, it will contain all the system's mount points. Rather than get into the theoretical/design aspects of this, I'll present what I hope is a useful recipe for findmnt --poll
; i.e. how to use it to get some results. Without further ado, here's a bash
script that monitors mounts and un-mounts of the /boot/firmware
file system:
#!/usr/bin/env bash
# My $0: 'pollmnt.sh'
# My purpose:
# Start 'findmnt' in '--poll' mode, monitor its output, log as required
POLLMNT_LOG='/home/pi/pollmnt.log'
/usr/bin/findmnt -n --poll=umount,mount --target /boot/firmware |
while read firstword otherwords; do
case "$firstword" in
umount)
echo -e "\n\n $(date +%m/%d/%y' @ '%H:%M:%S:%3N) ==========> case: umount" >> $POLLMNT_LOG
sleep 1
sudo dmesg --ctime --human >> $POLLMNT_LOG
;;
mount)
echo -e "\n\n $(date +%m/%d/%y' @ '%H:%M:%S:%3N) ==========> case: mount" >> $POLLMNT_LOG
sleep 1
sudo dmesg --ctime --human | grep nvme >> $POLLMNT_LOG
;;
move)
echo -e "\n\n $(date +%m/%d/%y' @ '%H:%M:%S:%3N) ==========> case: move" >> $POLLMNT_LOG
;;
remount)
echo -e "\n\n $(date +%m/%d/%y' @ '%H:%M:%S:%3N) ==========> case: remount" >> $POLLMNT_LOG
sudo dmesg --ctime --human | grep nvme >> $POLLMNT_LOG
;;
*)
echo -e "\n\n $(date +%m/%d/%y' @ '%H:%M:%S:%N) ==========> case: * (UNEXPECTED)" >> $POLLMNT_LOG
;;
esac
done
-
listmount() and statmount(); LWN article
I've used both `scp` and `sftp` for a while, but to be honest, I've never given much thought to the differences. I always followed a "canned example" when I needed to transfer a file, but never considered that there might be differences worth much thought. I was probably wrong about that; here's a brief rundown:
# FROM: local TO: remote
$ scp local-file.xyz remote-user@hostname:/remote/destination/folder
# EXAMPLE:
$ scp pitemp.sh pi@rpi5-2:/home/pi/bin
# RESULT: local file 'pitemp.sh' is copied to remote folder 'home/pi/bin' on host 'rpi5-2'
# -------------------------------------
# FROM: remote TO: local
$ scp remote-user@hostname:/remote/folder/remote-file.xyz /local/folder
# EXAMPLE:
$ scp pi@rpi5-2:/home/pi/bin/pitemp.sh ~/bin
# RESULT: remote file 'home/pi/bin/pitemp.sh' on host rpi5-2 is copied to local folder '~/bin'
And so we see that scp
transfers are specified completely from the command invocation. There are numerous options; see man scp
for details - and here's a brief, but informative blog post that summarizes the more noteworthy options.
# CONNECT TO ANOTHER HOST:
$ sftp remote-user@hostname
# EXAMPLE:
$ sftp pi@rpi2w
Connected to rpi2w.
sftp>
# YOU ARE AT THE `sftp` COMMAND PROMPT; YOU MUST KNOW SOME COMMANDS TO PROCEED!
sftp> help
Available commands:
# ... THE LIST CONTAINS APPROXIMATELY 33 COMMANDS THAT ARE AVAILABLE!
sftp> cd /home/pi/bin
sftp> pwd
Remote working directory: /home/pi/bin
sftp> ls
dum-dum.sh pitemp.sh
sftp> lcd ~/bin
sftp> lpwd
Local working directory: /home/pi/bin
sftp> lls
pitemp.sh
sftp> get dum-dum.sh
Fetching /home/pi/bin/dum-dum.sh to dum-dum.sh
sftp> lls
dum-dum.sh pitemp.sh
sftp> quit
$
# RESULT: remote file '/home/pi/bin/dum-dum.sh' on host 'rpi2w' is copied to local folder '/home/pi/bin'
scp
is said to be faster (more efficient) than sftp (I haven't tested this myself). Both scp
and sftp
are built on SSH's authentication and encryption.
Here's what I feel is the key tradeoff between scp
and sftp
:
scp
is simple and succinct; OTOH sftp might be considered more versatile.
Personally, I feel sftp
is better suited to a situation where many files in several folders need to be transferred in both directions between two hosts. But then, that's what rsync
does so well. This explains why scp
is my "go-to" for limited file transfers.
If you have a Raspberry Pi model Zero, 1, 2 or 3, you have no need for the rpi-eeprom
package. It's useful only on the RPi 4 and RPi 5 because they are the only two models with... EEPROM! But if you try to use apt
to remove (or purge) rpi-eeprom
, you'll find that rpi-eeprom
has been carelessly (stupidly?) packaged in such a way that several useful utilities will be swept out with it!
On my system ('bookworm-Lite, 64-bit'), there are 27 additional packages identified by apt
that will also be removed - either directly, or through sudo apt autoremove
. These packages include:
iw
rfkill
bluez
device-tree-compiler
dos2unix
pastebinit
uuid
pi-bluetooth
raspi-config
Recognize any of these :) ?
When I discovered this, I posted an issue on RPi's rpi-eeprom
repository at GitHub; I initially assumed there must be a good reason for this, and my initial post reflected that assumption. Afterwards, there was one comment that acknowledged the issue, and indicated a change should be made. And then the shit-storm started. Most of the rest of the comments from The Raspberries were either arrogant, condescending, false, a waste of time - or all of the above.
The apparent "leader" of this repo - Chief Know-Nothing - weighed in saying that the rpi-eeprom
package needed a "bit of polish", but indicated that it was a low priority. (I guess Chiefs are v. busy at RPi?) When I called the "bit of polish" remark a gross understatement, my comment was deleted, and I was banned for life from all raspberrypi repos! :) Giving censorship privileges to those whose job is software maintenance seems strange & risky management policy to me, but perhaps the thinking is "these are all bright, responsible Cambridge lads - they know how to behave"?
Anyway - until Chief Know-Nothing is compelled to move on this, there's not much to do. One could use apt-mark
to "pin" packages for non-removal... There are also some relevant Q&A that discuss how to handle this situation with the help of dpkg
and aptitude
: 1, 2, 3, 4. However, AIUI, all of these will ultimately depend upon the party that prepared the packages having some minimal level of competence. And given the messy state of these packages now, I tend to doubt Chief Know-Nothing has that level of competence.
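In the meantime, a couple of defensive moves may be worth a look. This is only a sketch; whether a 'hold' blocks every removal scenario is something to verify on your own system:
# preview exactly what 'apt' would remove, without actually changing anything:
$ sudo apt-get remove --simulate rpi-eeprom
# place a 'hold' on packages you don't want swept away:
$ sudo apt-mark hold raspi-config pi-bluetooth
# ...and remove the hold later if need be:
$ sudo apt-mark unhold raspi-config pi-bluetooth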
In the course of doing things on my systems, I occasionally need to 'move' (mv
), 'copy' (cp
), or 'install' (install
) files and/or folders from one location to another. And occasionally, I will screw up by accidentally over-writing (effectively deleting) a file (or folder) in the destination. Fortunately, the folks responsible for GNU software have developed 'command options' for avoiding these screw-ups.
Among these 'command options' are -n, --no-clobber
, -u, --update
, and -b, --backup
. In many cases, the -b, --backup
option has some compelling advantages over the other options. Why? When using cp
, mv
or install
in a script, the objective is to get the job done without any screw-ups. That's what the backup
options do. The --no-clobber and --update
options may prevent the screw-ups, but they don't get the job done.
The backup
option documentation on GNU's website is very good. Any unanswered questions may be explored with a bit of testing. So let's try the backup
options to get a feel for how they work:
$ ls -l ~/
-rw-r--r-- 1 pi pi 14252 Mar 4 2023 paradiselost.txt
drwxr-xr-x 4 pi pi 4096 Nov 25 04:35 testthis
$ ls -l ~/testthis
-rw-r--r-- 1 pi pi 14252 Nov 25 04:48 paradiselost.txt
# Note the difference in mod times of the two files;
# i.e. the files are different, but have the same name
$ cp -a paradiselost.txt ~/testthis
$ ls -l ~/testthis
-rw-r--r-- 1 pi pi 14252 Mar 4 2023 paradiselost.txt
# The file has been "over-written" by an older version!
# IOW - a **screw-up**!
# Let's reset & try with a 'backup' option
$ ls -l ~/testthis
-rw-r--r-- 1 pi pi 14252 Nov 25 05:08 paradiselost.txt
$ cp -ab paradiselost.txt ./testthis
$ ls -l ~/testthis
-rw-r--r-- 1 pi pi 14252 Mar 4 2023 paradiselost.txt
-rw-r--r-- 1 pi pi 14252 Nov 25 05:08 paradiselost.txt.~1~
# We see that the **original** file (Nov 25 mod) has been 'backed up';
# i.e. a '.~1~' has been appended to the file name. Here '-b' gave the
# same result as '--backup=numbered'; note that the style plain '-b' uses
# depends on the VERSION_CONTROL environment variable (default: 'existing').
This is ideal for use in automated scripts as it does the safe thing; i.e. doesn't overwrite potentially important files. The documentation is here (for cp
), and here (for the backup
options specifically).
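The same backup options work with mv and install. A quick sketch, continuing the example above; the final listing shows what you'd expect to see, not captured output:
$ mv --backup=numbered paradiselost.txt ~/testthis
$ ls ~/testthis
paradiselost.txt  paradiselost.txt.~1~  paradiselost.txt.~2~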
The -b, --backup
option is (AFAIK) available only in GNU's coreutils versions of cp
and mv
. As usual Apple sucks, or is bringing up the rear, by eschewing the newer GPL licensing terms... HOWEVER, there are options for Apple/macOS users: MacPorts offers the coreutils
package, and it may also be available through one of the other macOS package managers.
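For reference, a sketch of what that looks like on macOS - both MacPorts and Homebrew install the GNU tools with a 'g' prefix by default, so GNU cp becomes gcp (verify the names on your own installation):
$ sudo port install coreutils             # MacPorts
$ brew install coreutils                  # Homebrew
$ gcp -ab paradiselost.txt ~/testthis     # GNU cp, with the backup option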
This is a rather simple-minded application of the rather sophisticated utility called socat
. In the context of this recipe, testing a network connection means that we wish to verify that a network connection is available before we actually use it. You might wonder, "How could that be useful?"... and that's a fair question. I'll answer with an example:
Example: Verify a NAS file server is online before starting a local process
Let's say that we have a cron
job named loggit.sh
scheduled to run @reboot
on HOST1. loggit.sh
reads data from a number of sensors, and logs that data to the NAS_SMB_SRVR. HOST1 and NAS_SMB_SRVR are connected over our local network. And so, before HOST1 begins writing the sensor data to NAS_SMB_SRVR, we want to ensure that the network connection between them is operational.
We will use socat
in loggit.sh
to verify the network connection is viable:
...
# test network connection to NAS using SMB protocol
while :
do
    socat /dev/null TCP4:192.168.1.244:445 && break
    sleep 2
done
...
# send sensor data to log files on NAS_SMB_SRVR
Let's break this down:
- the socat command is placed in an infinite while (or until) loop w/ a 2 sec sleep per iteration
- socat uses a SOURCE DESTINATION format: /dev/null is the SOURCE (HOST1); TCP4:192.168.1.244:445 is the DESTINATION (protocol:ip-addr:port for NAS_SMB_SRVR)
- if socat can establish the connection (i.e. $? = 0), we use break to exit the loop
socat
is a versatile & sophisticated tool; in this case it provides a reliable test for a network connection before that connection is put into use.
- GNU's bash Reference Manual - in a variety of formats
- GNU's Core Utilities - 'coreutils' - in a variety of formats
- Shell Builtin Commands - an index to all the builtins
- Bash POSIX Mode; a brief guide for using POSIX mode in bash
- Baeldung's Linux Tutorials and Guides - excellent & searchable
- Wooledge's Bash Guide; can be puzzling to navigate, may be a bit dated, but still useful
- How to find all the bash How-Tos on linux.com; this really shouldn't be necessary!
- commandlinefu.com - a searchable archive of command line wisdom
- Cool Unix and Linux CLI Commands - nearly 10,000 items!
- Assign Output of Shell Command To Variable in Bash; a.k.a. command substitution
- Difference .bashrc vs .bash_profile (which one to use?); good explanation & good overview!
- Q&A: What is the purpose of .bashrc and how does it work?
- Q&A: What is the .bashrc file?
- What is Linux bashrc and How to Use It
- Q&A: How to reload .bash_profile from the command line?
- Q&A Setting up aliases in zsh and more.
- How to Create and Remove alias in Linux
- The alias Command
- Bash aliases you can’t live without
- How to Create Aliases and Shell Functions on Linux; aliases, functions & where they're saved.
- Unix/Linux Shell Functions explained at tutorialspoint.
- Q&A: Alias quotation and escapes; aliases and functions - an example
- Advanced Bash-Scripting Guide - useful, but a bit out-of-date; 10 Mar 2014 when last checked
- Q&A: In a Bash script, how can I exit the entire script if a certain condition occurs?
- Command Line Arguments in Bash - a good, brief overview.
- How to Create & Use
bash
Scripts - a very good tutorial by Tania Rascia - Passing arguments to bash:
- How to Pass Arguments to a Bash Script - an article on Lifewire.com.
- Parsing bash script options with getopts - a short article by Kevin Sookocheff.
- A small getopts tutorial (p/o the bash hackers wiki)
- Q&A on StackOverflow: How to get arguments with flags in bash
- Q&A re use of the
shebang
line - Bash Infinite Loop Examples - infinite loops
- Bash Scripting – the
while
loop - infinite loops - How to loop forever in bash - infinite loops
- Create A Infinite Loop in Shell Script - infinite loops
- Infinite while loop - infinite loops
- Q&A: Terminating an infinite loop - infinite loops
- Functions in bash scripting from Ryan's Tutorials - a good and thorough overview w/ examples.
- Q&A: Shell scripting: -z and -n options with if - recognizing null strings
- Q&A re executing multiple shell commands in one line; sometimes you don't need a script !
- "Filename expansion"; a.k.a. "globbing"; what is it, and why should I care?
- A GitHub repo of globbing; odd choice for a repo methinks, but contains some useful info.
- Globbing and Regex: So Similar, So Different; some of the fine points discussed here.
- Writing to files using bash. Covers redirection and use of tee
- Using formatted text in your outputs with printf: REF 1, REF 2 - beats echo every time!
- sh - the POSIX Shell; from Bruce Barnett, aka Grymoire
- How to Safely Exit from Bash Scripts; executing v. sourcing a script & role of exit v. return (Baeldung)
- The Geek Stuff: Bash String Manipulation Examples – Length, Substring, Find and Replace
- The Tutorial Kart: bash string manipulation examples
- Bash – Check If Two Strings are Equal - learn to compare strings in a shell script by example.
- Q&A: Replacing some characters in a string with another character; using
tr
andbash
built-ins. - Q&A: remove particular characters from a variable using bash; a variety of methods!
- Advanced Bash-Scripting Guide: Chapter 10.1. Manipulating Strings; details!
- The Wooledge Wiki is a trove of string manipulation methods for
bash
.
- Q&A Can grep return true/false or are there alternative methods?.
- Q&A grep on a variable.
- Grep OR – Grep AND – Grep NOT – Match Multiple Patterns;
grep -E "PATTERN1|PATTERN2" file
- How To find process information in Linux -PID and more
- Q&A How do I find all files containing specific text on Linux? - a popular Q&A
- The GNU grep manual - recommended by
man grep
!
- The state of AWK - an extensive article in the May 2020 issue of LWN
- Learn AWK - a comprehensive tutorial from tutorialspoint.com.
- The GNU awk User's Guide - The Real Thing
- References explaining the many flavors of
awk
:
- Q&A: What are the shell's control and redirection operators?
- Redirect stderr to stdout, and redirect stderr to a file - from nixCraft
- 15 Special Characters You Need to Know for Bash - a collection of useful bits and bobs from HTG
- Linux - Shell Basic Operators; a quick overview on a single page.
- Section 7.1. Test Constructs from Advanced Bash-Scripting Guide; e.g. [ ] vs. [[ ]] - testing
- Using Square Brackets in Bash: Part 1; what do these brackets [] do exactly?
- Using Square Brackets in Bash: Part 2; more on brackets
- All about {Curly Braces} in Bash; how do you expect to get on in life without {} ??
- Section 7.2. File test operators from Advanced Bash-Scripting Guide; e.g. test if regular file w/ -f
- Section 7.3. Other Comparison Ops from Advanced Bash-Scripting Guide; e.g. integers, strings,
&&
,||
- How do I use sudo to redirect output to a location I don't have permission to write to?
- Q&A: How do I clear Bash's cache of paths to executables? help with
which
& alternatives - Q&A: Why isn't the first executable in my $PATH being used? more help with
which
- Q&A: Why not use “which”? What to use then? more on
which
, alternatives &hash
for cache updates
- How to Use Your Bash History in the Linux or macOS Terminal; a How-To-Geek article
- Q&A: Where is bash's history stored?; good insights available here!
- Using History Interactively - A bash User's Guide; from the good folks at GNU.
- The Definitive Guide to Bash Command Line History; not quite - but it's certainly worth a look.
- How To Use Bash History Commands and Expansions on Linux; useful
- Bash History Command Examples; 17 of them at last count :)
- Improved BASH history for ...; a MUST READ; yeah - this one is good.
- Using Bash History More Efficiently: HISTCONTROL; from the Linux Journal
- Preserve Bash History in Multiple Terminal Windows; from baeldung.com
- 7 Tips – Tuning Command Line History in Bash; is good
- Working With History in Bash; useful tips for
bash
history
- Uses for the Command Line in macOS - from OSX Daily
- Deleting all files in a folder, but don't delete folders
- Removing all files in a directory
- Q&A re clever use of
xargs
- Lifewire article explains How to Display the Date and Time Using Linux Command Line
- Exit status of last command using PROMPT_COMMAND (An interesting thing worth further study)
- Q&A: How to convert HTML to text?; short answer:
curl <html URL> | html2text
- Use findmnt to check if a filesystem/folder is mounted; findmnt
- Q&A: How to create a link to a directory - I think he got it right!
- How To Read And Work On Gzip Compressed Log Files In Linux
- Using anchor ^ pattern when using less / search command; find what you need in that huge
man
page - Regular-Expressions.info: the premier regex website - really useful & detailed
- Q&A: What key does
M
refer to innano
? - Wooledge on "Why you shouldn't parse the output of ls(1)"
- Listing with
ls
and regular expression - related to the Wooledge reference above. - Q&A: How can I change the date modified/created of a file?