Linux Command Line for DevOps — Recognition First


A note before section 1
We use one recurring example throughout: TaskNote, a small to-do list web app with a Node.js API (Application Programming Interface — the way programs talk to each other) and a PostgreSQL (popular open-source database) backend. It runs as a systemd service on an Ubuntu server. Every command we teach will, sooner or later, be used to inspect, deploy, debug, or back up TaskNote.
The default operating system in this tutorial is Ubuntu / Debian Linux. The default shell is bash. When something differs on Red Hat-family Linux (RHEL, AlmaLinux, Rocky), we mention it in one line.
The goal is recognition, not memorization. After one read, you should look at any DevOps script and understand what every command does — even if you have not memorized every flag.
1. The Mental Model
A DevOps engineer spends most of the day in a shell (a program that reads what you type and runs other programs) because everything in Linux is a small program that does one job. The shell glues those programs together with pipes and redirects. Once you internalize the pattern, the whole landscape stops feeling like memorization.
Think of it like… an assembly line. Each station does one small task. The conveyor belt (the pipe) carries the work from one station to the next. You change the line by adding or removing stations, not by rewriting the whole factory.
In software, this looks like… finding the top ten busiest IPs hitting TaskNote is one line: read the access log → cut the first column → sort → count → sort by count → keep ten. Five small programs, one pipe each.
Why it matters: the same five programs solve a hundred problems. You learn each one once.
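A runnable sketch of the assembly-line idea, on made-up input (the word list is invented for illustration):

```shell
# Count how often each word appears: three small programs, two pipes.
printf 'deploy\nbuild\ndeploy\ntest\ndeploy\n' \
  | sort \
  | uniq -c \
  | sort -rn
# The most frequent word ("deploy", 3 times) ranks first.
```

Swap the printf for a real log and the same three stations keep working — that is the assembly line.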
The five jobs the shell helps you do
- Move around and find things — cd, ls, find, grep.
- Read and change files — cat, tail, sed, redirects.
- Control processes — ps, kill, systemctl, tmux.
- Talk to other machines — ssh, curl, rsync, dig.
- Automate the repeat — bash scripts, cron, systemd timers.
Every section in this tutorial maps to one of these five jobs.
Don't confuse with… a terminal (the window or app you type into — Terminal.app, GNOME Terminal, iTerm2) is not the shell. The terminal is the screen; the shell is the program inside it.
2. Survival Kit — Don't Get Stuck
Before any command, the basics that make the shell usable.
The prompt
A prompt (the text the shell prints before each command, telling you who and where you are) often looks like deploy@tasknote-prod-1:/srv/tasknote$. Reading it left-to-right: user, hostname, current working directory (the folder the shell is "in" right now, shown by the pwd command), then $ for a normal user or # for root (the all-powerful admin user).
Tab completion — the most important key
Press Tab to complete a command name, file name, or option. Press Tab Tab to list candidates if there are several. This single key prevents 80% of typos and teaches you what is available.
History
- Arrow Up / Down — walk through previous commands.
- Ctrl-R — reverse search (start typing, press Ctrl-R again to step backward).
- !! — repeat the last command (sudo !! runs the previous command as root).
- !$ — last argument of the previous command.
Cancel and clear
- Ctrl-C — cancel the current command.
- Ctrl-D — end of input (logs out a shell, ends stdin for a program reading from it).
- Ctrl-L or clear — clear the screen.
Getting help
A command (a program you can run from the shell, like ls or grep) almost always has built-in help.
- man <cmd> — the manual page (the canonical reference, scrollable, often dense).
- <cmd> --help — the short version, fits on a screen.
- tldr <cmd> — community examples, one screen of "real cases."
Install tldr once: sudo apt install tldr. It is the friendlier man.
What error messages really mean
| Message | Real meaning | Fix |
|---|---|---|
command not found | The shell cannot find the program in PATH (Section 3). | Install it, or check the typo. |
permission denied | You lack rights to read/write/execute that path. | ls -l to see the permission, then chmod/chown (Section 7). |
no such file or directory | The path you typed does not exist. | Check spelling; pwd to see where you are. |
is a directory | You tried to read a folder as a file. | Use ls to list it, or cd into it. |
3. Files and Directories — Moving Around
We define every concept the first time it appears.
- An absolute path (a path that starts at /, the filesystem root, and works from anywhere) — /srv/tasknote/server.js.
- A relative path (a path interpreted from the current working directory) — server.js or ../config/.
- The home directory (your personal folder, also written ~) — /home/deploy.
- A hidden file (any file or folder starting with a dot — also called a "dotfile") — .bashrc, .ssh/. They do not appear in ls unless you ask for them.
pwd — print working directory
pwd shows the absolute path of where you are.
pwd
# /home/deploy
cd — change directory
cd /srv/tasknote # absolute path
cd uploads # relative — jump into ./uploads
cd .. # go up one
cd ~ # go home
cd - # back to the previous directory
ls — list files
ls lists files in a directory.
| Flag | Effect |
|---|---|
-l | long format with size, date, owner |
-a | include hidden files (dotfiles) |
-h | human-readable sizes (12K, 4.3M) |
-t | sort by modification time (newest first) |
-S | sort by size |
ls -lah is the workhorse combination.
mkdir — make directories
mkdir build
mkdir -p /srv/tasknote/uploads/2026-05 # -p creates parent dirs as needed
cp — copy files
cp config.example.env .env # one file
cp -r src/ build/ # -r copies a directory tree
cp -p file dest # -p preserves timestamps + permissions
cp -i src dest # -i prompts before overwrite
mv — move or rename
mv old-name new-name # rename
mv release-1.4.0/ /srv/tasknote/ # move a directory
rm — remove files
Destructive command warning. rm does not move things to a trash bin. There is no undo. rm -rf <path> deletes the whole tree under <path> without asking. Double-check the path before you press Enter, every time.
rm file.tmp # one file
rm -r old-builds/ # -r recursive (a directory)
rm -rf node_modules/ # -f skips prompts (think twice)
rm -i secret.txt # -i prompts for each file
touch, stat, file
- touch newfile — create an empty file (or update its timestamp).
- stat path — show inode (the on-disk record holding a file's metadata: size, owner, timestamps, blocks), size, owner, timestamps.
- file path — guess what kind of file (text, ELF binary, gzip, JSON).
Symlinks vs hard links
- A symlink (soft link — a file that points by name to another path) is a shortcut. Delete the target, the link breaks.
- A hard link (a second name for the exact same inode) is a second name for the same data. Delete the original, the data still lives under the other name.
ln -s /srv/tasknote/release-1.4.0 /srv/tasknote/current # symlink: target first, link name second
ln file.txt second-name # hard link
Think of it like… a symlink is a sticky note saying "the book is in the library, shelf B7." A hard link is a second cover stitched onto the same book.
realpath, readlink
- realpath path — resolve all symlinks, print the absolute path.
- readlink path — show what a symlink points to (without following further).
Wildcards (globs)
A glob (a pattern the shell expands into matching file names — *, ?, [abc], {a,b}) turns one term into many.
| Pattern | Matches |
|---|---|
*.log | every file ending in .log |
access?.log | access1.log, accessB.log (one char) |
app[12].log | app1.log or app2.log |
{prod,staging}.env | brace expansion → prod.env staging.env |
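A quick way to convince yourself how the shell expands these — a scratch-directory sketch (file names invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"
touch app1.log app2.log access.log notes.txt

echo *.log              # access.log app1.log app2.log — expanded before echo runs
echo app[12].log        # app1.log app2.log
echo {prod,staging}.env # prod.env staging.env — brace expansion needs no real files

cd - >/dev/null && rm -r "$tmp"
```

Note that echo never sees the pattern: the shell replaces it with the matching names first.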
Real situation — TaskNote uses a current symlink pointing at the latest release directory. Deploys flip the symlink atomically with ln -sfn, so a rollback is one symlink change.
4. Reading and Writing Files
cat — concatenate and print files
cat writes a file to standard output. Good for short files; do not use it on a 2 GB log.
cat /etc/hostname
cat header.txt body.txt > combined.txt # joins two files (note > below)
less — page through a file
less opens a file in a scrollable pager. q quits. /pattern searches forward, n for next, G to the end, g to the start.
Common pagers: less (default, scroll up/down), more (legacy, simpler — recognize only), bat (syntax-highlighted modern alternative).
head and tail
- head -n 20 file — first 20 lines.
- tail -n 50 file — last 50 lines.
- tail -f file — follow the file, printing new lines as they arrive (the live-log workhorse).
Real situation — "I want to see the last 50 lines of an Nginx access log live as new requests come in." → tail -n 50 -f /var/log/nginx/access.log. Press Ctrl-C to stop.
echo and printf
- echo "hello" — print a string with a newline.
- printf "%-10s %s\n" name value — formatted output, like C printf. Prefer printf in scripts when format matters.
Redirects
A redirect (>, >>, <, 2>, &> — sends stdin/stdout/stderr to a file) is how the shell wires commands to files.
| Redirect | Effect |
|---|---|
cmd > file | write stdout to file (overwrite) — destructive |
cmd >> file | append stdout to file |
cmd < file | feed file as stdin |
cmd 2> err.log | write stderr to a file |
cmd 2>&1 | merge stderr into stdout |
cmd &> all.log | write both stdout and stderr to a file (bash) |
Destructive command warning. > overwrites the target file silently. echo "" > /etc/important.conf will erase the file. Use >> to append, or save to a new name first.
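The redirect table in one runnable sketch (paths are scratch files, not real config):

```shell
tmp=$(mktemp -d)
echo "first"  >  "$tmp/out.txt"     # overwrite (creates the file)
echo "second" >> "$tmp/out.txt"     # append — out.txt now has two lines
cat "$tmp/out.txt"

ls /no/such/path 2> "$tmp/err.txt" || true  # the error text lands in err.txt
ls /no/such/path > /dev/null 2>&1  || true  # discard both streams
rm -r "$tmp"
```

(The || true only keeps the sketch going under set -e; ls itself still exits non-zero.)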
tee — write and pass through
tee writes its input to a file and to stdout, so you can see what is being saved.
# Save the date, but also see it on the screen.
date | tee build.log
tee -a appends instead of overwriting.
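A minimal sketch (the log file name is invented):

```shell
tmp=$(mktemp -d)
echo "build ok" | tee "$tmp/build.log"      # shown on screen AND saved
echo "tests ok" | tee -a "$tmp/build.log"   # -a appends instead of overwriting
wc -l < "$tmp/build.log"                    # the file holds both lines
rm -r "$tmp"
```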
truncate and : — clear a log file
To clear a log file without deleting it (so the running app keeps writing to the same inode):
: > /var/log/tasknote.log # clears it (the : builtin produces no output)
truncate -s 0 /var/log/tasknote.log
Heredocs — preview
A here-document (a way to feed a multi-line block as stdin, marked by <<EOF … EOF) lets you embed multi-line input in a script. Section 19 covers them in full. Preview:
cat <<EOF > /etc/tasknote/welcome.txt
Welcome to TaskNote.
Today is $(date +%F).
EOF
5. Searching for Things
find — find files by name, type, age, size
find walks a directory tree and prints paths that match a test.
# Files named *.log under /var/log, modified in the last 2 days.
find /var/log -name '*.log' -mtime -2 -type f
| Test | Meaning |
|---|---|
-name 'pat' | name matches glob |
-iname 'pat' | name matches glob, case-insensitive |
-type f / -type d | regular file / directory |
-mtime -7 | modified in the last 7 days |
-size +100M | larger than 100 MB |
-user deploy | owned by deploy |
-not -name '*.gz' | invert the match |
-exec runs a command on each match. {} is the path; \; ends the command.
# Compress every .log older than 30 days.
find /var/log -name '*.log' -mtime +30 -exec gzip {} \;
Destructive command warning. find ... -delete removes files instantly. Run the command without -delete first to see what would be removed.
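A safe way to rehearse the pattern before touching /var/log — the same tests in a scratch tree (names invented):

```shell
tmp=$(mktemp -d)
touch "$tmp/app.log" "$tmp/old.log" "$tmp/notes.txt"

find "$tmp" -name '*.log' -type f          # preview: only the two .log files
find "$tmp" -name '*.log' -exec gzip {} \; # then act: compress each match
ls "$tmp"                                  # app.log.gz old.log.gz notes.txt
rm -r "$tmp"
```

Preview first, act second — the same habit that makes -delete safe.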
fd — modern, friendlier find
fd (a faster, simpler alternative to find with sane defaults) gives the same answers in less typing.
fd '\.log$' /var/log # default: regex on filename
fd -e log /var/log # by extension
grep — search inside files
grep prints lines matching a pattern. The most-used Unix command.
| Flag | Effect |
|---|---|
-i | case-insensitive |
-r | recurse into directories |
-v | invert (lines that do not match) |
-E | extended regex (no need to escape + or ?) |
-l | only print file names with a match |
-n | print line numbers |
-A 3 -B 3 -C 3 | show 3 lines after / before / around |
--include='*.js' | only search files matching a glob |
# Every JS file under src/ that contains TODO, with line numbers.
grep -rn --include='*.js' TODO src/
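The flags on a small made-up file:

```shell
tmp=$(mktemp -d)
printf 'TODO: add auth\nall good here\ntodo: retry logic\n' > "$tmp/app.js"

grep -n TODO "$tmp/app.js"    # line numbers; case-sensitive, one hit
grep -ci todo "$tmp/app.js"   # count matching lines, case-insensitive: 2
grep -v -i todo "$tmp/app.js" # invert: the line with no TODO at all
rm -r "$tmp"
```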
ripgrep (rg) — modern default
ripgrep (faster grep that respects .gitignore by default) is the recommended default. Same mental model as grep, less typing.
rg TODO src/ # auto-ignores node_modules, .git
rg -A 3 'panic:' /var/log
which, type, command -v — find an executable
- which python3 — path to the binary on PATH.
- type python3 — what python3 is (binary, alias, function).
- command -v python3 — script-friendly version, useful in if checks.
6. Text Pipelines — The Real Power of the Shell
This is the heart of shell mastery.
The pipe |
A pipe (| — connects the standard output of one command to the standard input of the next) is how small commands compose into big workflows. cmd1 | cmd2 runs both at the same time, streaming output from one to the other.
Think of it like… stations on an assembly line. Each station receives output from the one before, does its small job, hands it on.
We now meet the small commands you stack between pipes.
sort
sort orders lines.
| Flag | Effect |
|---|---|
-n | numeric sort |
-r | reverse |
-k 2 | sort by the second whitespace field |
-u | unique (drops duplicates) |
-h | human-numeric (12K, 4M, 2G) |
uniq
uniq collapses adjacent duplicate lines. Inputs must be sorted first.
- uniq -c — prefix each line with its count.
- uniq -d — only print duplicates.
cut
cut slices columns out of each line.
cut -d ':' -f 1 /etc/passwd # -d sets the delimiter, -f picks fields
wc
wc counts.
- wc -l file — lines.
- wc -w file — words.
- wc -c file — bytes.
tr
tr translates, squeezes, or deletes characters.
echo "Hello" | tr 'a-z' 'A-Z' # to uppercase
echo "a,b,,c" | tr -s ',' # squeeze repeats
echo "abc 123" | tr -d '0-9' # delete digits
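cut, tr, and wc together on a synthetic /etc/passwd-style line:

```shell
line='deploy:x:1001:1001:Deploy User:/home/deploy:/bin/bash'

echo "$line" | cut -d ':' -f 1      # deploy — the username field
echo "$line" | cut -d ':' -f 7      # /bin/bash — the login shell
echo "$line" | tr ':' '\n' | wc -l  # 7 — fields, once colons become newlines
```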
sed — stream editor
sed edits a stream. The two patterns you actually use:
# Substitute: change "old" to "new" everywhere.
sed 's/old/new/g' file.txt
# Edit a file in place (back up to .bak first!).
sed -i.bak 's/127.0.0.1/0.0.0.0/g' /etc/myapp.conf
# Delete every line containing "DEBUG".
sed '/DEBUG/d' app.log
Destructive command warning. sed -i rewrites the file. Always pass a backup suffix (-i.bak) or run on a copy first.
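The backup-suffix habit, rehearsed on a scratch config (contents invented; the dots are escaped so they match literally):

```shell
tmp=$(mktemp -d)
printf 'host=127.0.0.1\nDEBUG noise\nport=8080\n' > "$tmp/app.conf"

sed 's/127\.0\.0\.1/0.0.0.0/' "$tmp/app.conf"         # preview on stdout only
sed -i.bak 's/127\.0\.0\.1/0.0.0.0/' "$tmp/app.conf"  # in place, original kept
grep host "$tmp/app.conf"       # host=0.0.0.0
grep host "$tmp/app.conf.bak"   # host=127.0.0.1 — the safety copy
sed '/DEBUG/d' "$tmp/app.conf"  # stream with DEBUG lines dropped
rm -r "$tmp"
```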
awk — field-oriented mini-language
awk splits each line into fields ($1, $2, …) and runs a small program over them.
# Print the second whitespace-separated column of each line.
awk '{print $2}' access.log
# Lines where the third column (response time) is > 100 ms.
awk '$3 > 100' timings.log
# Sum the bytes column.
awk '{sum += $10} END {print sum}' access.log
You can ignore 90% of awk's features and still use it daily.
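The three patterns on a synthetic timing log (columns invented: path, status, milliseconds):

```shell
f=$(mktemp)
printf '/api/tasks 200 12\n/api/tasks 500 240\n/health 200 3\n' > "$f"

awk '{print $2}' "$f"                   # the status column
awk '$3 > 100' "$f"                     # only the slow request
awk '{sum += $3} END {print sum}' "$f"  # 255 — total milliseconds
rm "$f"
```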
xargs — turn lines into arguments
xargs reads lines from stdin and runs a command using them as arguments. Pairs perfectly with find.
# Compress every .log file in the list.
find /var/log -name '*.log' | xargs gzip
# Safer with NULL-separated lines for paths with spaces.
find /var/log -name '*.log' -print0 | xargs -0 gzip
xargs -n 1 runs one argument at a time; xargs -P 4 runs four in parallel.
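Why -print0 | xargs -0 matters — a file name with a space breaks the naive version (names invented):

```shell
tmp=$(mktemp -d)
touch "$tmp/a.log" "$tmp/b c.log"   # note the space in the second name

# NUL-separated paths survive any file name.
find "$tmp" -name '*.log' -print0 | xargs -0 gzip
ls "$tmp"   # a.log.gz and "b c.log.gz" — both compressed
rm -r "$tmp"
```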
tee — branch the pipe
We met tee earlier. In a pipeline it splits a stream so you can save intermediate output:
journalctl -u tasknote | tee tasknote.log | grep ERROR
jq — JSON queries
A JSON (JavaScript Object Notation — a key-value text format used by APIs) processor in your shell. Everything that emits JSON pairs with jq.
# Pretty-print.
curl -s api/health | jq
# Pull a field.
curl -s api/health | jq '.status'
# Loop and select.
kubectl get pods -o json | jq '.items[] | select(.status.phase=="Running") | .metadata.name'
yq — YAML cousin
yq does for YAML what jq does for JSON. Same query language in modern versions.
yq '.spec.replicas' deployment.yaml
Six real DevOps one-liners
Each line is dissected into pipe stages.
Top 10 IPs hitting the server:
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head
- awk '{print $1}' — extract the first column (IP).
- sort — bring duplicate IPs together.
- uniq -c — collapse and count.
- sort -rn — sort by count, descending.
- head — top 10.
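The same pipeline run end-to-end on a tiny made-up log, so each stage is visible:

```shell
f=$(mktemp)
printf '10.0.0.5 GET /\n10.0.0.9 GET /\n10.0.0.5 GET /api\n10.0.0.5 POST /api\n' > "$f"

awk '{print $1}' "$f" | sort | uniq -c | sort -rn | head
# Top line counts 3 hits from 10.0.0.5, then 1 from 10.0.0.9.
rm "$f"
```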
Biggest folders under /var:
sudo du -sh /var/* 2>/dev/null | sort -h | tail
- du -sh — size of each child of /var.
- 2>/dev/null — silence permission errors.
- sort -h — human-numeric sort (12K < 4M < 2G).
- tail — biggest items at the bottom.
Pull the API version from a JSON endpoint:
curl -fsS https://tasknote.example.com/health | jq -r '.version'
Find log lines with errors in the last hour:
journalctl -u tasknote --since "1 hour ago" -p err | wc -l
Names of running pods (Kubernetes):
kubectl get pods -o json | jq -r '.items[] | select(.status.phase=="Running") | .metadata.name'
HTTP status code distribution from an access log:
awk '{print $9}' access.log | sort | uniq -c | sort -rn
7. Permissions, Ownership, and sudo
Every file has an owner, a group, and three sets of three permission bits.
The rwxrwxrwx model
Run ls -l and read the first column.
-rwxr-xr-- 1 deploy www-data 1234 May 3 10:00 deploy.sh
- The first character: - regular file, d directory, l symlink.
- Three rwx triplets: owner, group, others. r — read. w — write. x — execute (or "enter" for a directory).
- Then the owner (deploy) and the group (www-data).
Octal modes
Each rwx triplet is a number 0–7. Octal mode (a 3- or 4-digit number that encodes the rwx bits) is the script-friendly form.
| Mode | Meaning | Used for |
|---|---|---|
| 644 | rw-r--r-- | normal file |
| 755 | rwxr-xr-x | executable, public read |
| 600 | rw------- | owner-only file (env files, SSH keys) |
| 700 | rwx------ | owner-only directory (SSH dir) |
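You can round-trip a mode and its rwx form with GNU coreutils stat (the default on Ubuntu) — a scratch sketch:

```shell
tmp=$(mktemp -d)
touch "$tmp/.env"
chmod 600 "$tmp/.env"
stat -c '%a %A' "$tmp/.env"   # 600 -rw------- (GNU stat format flags)

chmod 755 "$tmp"
stat -c '%a' "$tmp"           # 755
rm -r "$tmp"
```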
chmod, chown, chgrp
- chmod 600 .env — set permissions (numeric).
- chmod +x deploy.sh — add execute (symbolic).
- chown deploy:deploy /srv/tasknote — change owner and group.
- chgrp www-data /srv/tasknote/uploads — change group only.
- chmod -R 755 /srv/tasknote — recursive.
Anti-pattern. chmod 777 "to fix permissions" makes the file world-writable. It hides the real problem (wrong owner) and creates a security hole. Diagnose with ls -l first.
umask
The umask (the permission bits stripped from new files by default — typically 022) decides default permissions. With umask 022, new files are 644, new directories 755.
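You can watch the subtraction happen (GNU stat syntax; the subshell keeps the umask change local):

```shell
(
  umask 022
  tmp=$(mktemp -d)
  touch "$tmp/newfile"
  stat -c '%a' "$tmp/newfile"   # 644 — 666 minus the 022 mask
  mkdir "$tmp/newdir"
  stat -c '%a' "$tmp/newdir"    # 755 — 777 minus the 022 mask
  rm -r "$tmp"
)
```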
sudo vs su
- sudo cmd (runs one command as root, asking for your password) is the modern, audited path.
- su - (switches to a full root shell — discouraged for daily use) is the old way.
Warning. Running everything as root is a bug, not a feature. Most scripts should run as a service user; sudo is for the few moments you really need it.
id, whoami, groups
- whoami — your current username.
- id — your UID, primary group, all groups.
- groups deploy — groups a user belongs to.
visudo and /etc/sudoers
The /etc/sudoers file (the file that controls who can run sudo, edited only with visudo) must be edited with visudo so a syntax error does not lock you out.
Real situation — "Your deploy script can't read the .env file." Run ls -l /srv/tasknote/.env. Owner is root, mode is 600. Either change owner (sudo chown deploy: .env) or change mode (sudo chmod 640 .env and add deploy to the right group). Diagnose before you flip bits.
8. Users and Groups
Recognition-level coverage.
The four files to know about
- /etc/passwd — user database (one line per user).
- /etc/group — group database.
- /etc/shadow — password hashes (root-readable only).
- /etc/skel/ — template files copied to a new user's home.
Commands
- sudo useradd -m deploy — create user deploy with a home directory.
- sudo passwd deploy — set or change the password.
- sudo usermod -aG sudo deploy — add deploy to the sudo group (-a append, -G group).
- sudo userdel -r deploy — delete user and home dir.
- sudo groupadd app — create a group.
The deploy user pattern
DevOps scripts run as a non-root user named deploy, app, or the app's name. The user owns /srv/<app>/, has SSH keys for CI, and can sudo systemctl restart <app> via a small sudoers.d rule. Never run app code as root.
9. Processes — Seeing and Controlling What's Running
A process (a running program — has a PID, parent, environment, open files) is the unit Linux schedules. Every process has a PID (Process ID — a number that uniquely identifies a process while it runs) and a parent process (the process that started it).
ps — process snapshot
The two patterns you will see:
ps aux # BSD-style, classic on Linux
ps -ef # System V style, equivalent in spirit
Both list every process with PID, user, CPU%, memory%, command. Pipe to grep to filter:
ps aux | grep tasknote
Interactive viewers
- top — built-in, every Linux box has it.
- htop (interactive top with colors and easy navigation) — install with apt install htop. The default for daily use.
- btop (prettier modern alternative) — install if you want fancier visuals.
- glances (broad system view, good for quick overview).
pgrep, pkill
- pgrep tasknote — PIDs of processes whose name matches.
- pkill tasknote — send a signal to processes by name.
Signals — kill and friends
A signal (a small message the kernel delivers to a process — SIGTERM, SIGKILL, SIGHUP) asks a process to do something.
| Signal | Meaning |
|---|---|
SIGTERM (15) | "please stop" — default, lets the process clean up |
SIGKILL (9) | "die now" — kernel kills it, no cleanup |
SIGHUP (1) | "reload" by convention (re-read config) |
SIGINT (2) | what Ctrl-C sends |
kill 12345 # SIGTERM
kill -9 12345 # SIGKILL — last resort, no cleanup
kill -HUP 12345 # ask the daemon to reload its config
Why it matters: always try SIGTERM first. kill -9 skips trap handlers and can leave half-written files.
Foreground, background, jobs
- cmd & — run in the background (the shell does not wait; you keep your prompt).
- Ctrl-Z — pause the current foreground process.
- jobs — list backgrounded jobs in this shell.
- fg %1 — bring job 1 to the foreground (the shell waits for it; Ctrl-C goes to it).
- bg %1 — resume job 1 in the background.
nohup and disown
- nohup long-job.sh & — run a job that survives if you log out.
- disown %1 — detach a job from the shell so it survives logout.
In practice, tmux (Section 10) is the more common solution.
nice and renice
- nice -n 10 cmd — start a command at a lower priority.
- renice -n 10 -p 12345 — change priority of a running process.
Not a daily tool — but you will see it in cron jobs that should not steal CPU.
lsof — list open files
Every socket and open file is visible here. The most useful incantation:
sudo lsof -i :8080 # what process holds port 8080
sudo lsof -p 12345 # what files PID 12345 has open
A daemon (a long-running background process, often a service)
Daemons are processes with no terminal, started by systemd (Section 15) or by nohup. The convention: their names often end in d (sshd, cron, dockerd).
10. Terminal Multiplexers — tmux Survival Kit
Every DevOps engineer ends up using one. SSH disconnects. You go home. The 4-hour database migration must keep running.
A terminal multiplexer (a program that runs a terminal session on the server, so you can attach and detach without losing it) solves this. tmux is the modern default; screen is the legacy one you will see in old docs.
The mental model
A tmux server runs on the machine. It holds sessions. Clients (your laptop, a colleague's laptop) can attach to a session, see the same shell, and detach without killing it.
Survival kit — twenty minutes
tmux new -s deploy # start a new session named "deploy"
tmux ls # list sessions on this server
tmux attach -t deploy # re-attach to it
Inside tmux, the prefix key is Ctrl-b by default. Then:
| Keys | Effect |
|---|---|
Ctrl-b d | detach (session keeps running) |
Ctrl-b c | new window (tab) |
Ctrl-b n / p | next / previous window |
Ctrl-b " | split horizontally |
Ctrl-b % | split vertically |
Ctrl-b arrow | move between panes |
Ctrl-b x | kill the current pane |
Ctrl-b [ | scroll mode (q to quit) |
Real situation — you SSH to a server, run tmux new -s migrate, start a 4-hour database migration, press Ctrl-b d, close your laptop. Three hours later, from a different network, ssh server, then tmux attach -t migrate — the migration is still running, the output is on screen.
11. The Filesystem and Disks
df — disk free per filesystem
df -h # human-readable per filesystem
df -h /var # the filesystem holding /var
A Use% near 100% is the cliff every DevOps incident slides off.
du — disk used per directory
du -sh * # size of each child of the current dir
sudo du -sh /var/* | sort -h # sorted, biggest at the bottom
ncdu — interactive du
ncdu (interactive disk-usage explorer with arrow-key navigation) is the friendly version. apt install ncdu, then ncdu /var.
lsblk — block devices
A block device (a kernel object representing a disk or partition — /dev/sda, /dev/nvme0n1) is what lsblk lists.
lsblk
Shows disks, partitions, sizes, mount points, all in one tree.
mount, umount, /etc/fstab
A mount point (a directory where a filesystem is attached, like /mnt/data) lets a disk show up in your tree. Recognition only:
- mount — list everything currently mounted.
- mount /dev/sdb1 /mnt/data — mount a partition.
- umount /mnt/data — unmount.
- /etc/fstab (the file that lists what to mount on boot) — edited rarely, with care.
swap and the filesystem types
- swap (a file or partition Linux uses when RAM is full) — see Section 11 of the VPS tutorial for setup. swapon --show lists active swap.
- A filesystem (the on-disk format, like ext4, xfs, btrfs) decides how data is stored. ext4 is the default on Ubuntu; xfs is common on RHEL.
A mini-tour of important directories
| Path | Contents |
|---|---|
/etc/ | system config files |
/var/log/ | logs |
/var/lib/ | application state (databases, queues) |
/home/ | user home directories |
/srv/ | application data, by convention |
/opt/ | optional third-party software |
/tmp/ | temporary files, cleared on reboot |
/usr/local/bin/ | locally-installed scripts |
Real situation — "The server is full and broken." The standard sequence:
1. df -h — which filesystem is full?
2. sudo du -sh /var/* | sort -h — what is eating it?
3. If logs: sudo journalctl --vacuum-size=200M.
4. If old packages: sudo apt clean && sudo apt autoremove.
12. Networking from the Shell
curl — the HTTP Swiss army knife
curl makes HTTP requests from the shell. We define the flags worth memorizing.
| Flag | Effect |
|---|---|
-I | only fetch headers (HEAD request) |
-L | follow redirects |
-X POST | choose method |
-d 'k=v' | request body |
-H 'Header: val' | add a header |
-o file | write body to a file |
-u user:pass | basic auth |
-v | verbose (request + response headers) |
--fail | exit non-zero on HTTP error |
-fsS | "fail silently sane" — quiet on success, show errors |
# Health check that exits non-zero on an HTTP error (-f is short for --fail).
curl -fsS https://tasknote.example.com/health
wget — recursive downloads
wget is older and simpler. Use it for whole-folder downloads or mirroring.
wget -r -np https://example.com/dir/ # recursive, no parent
httpie (http) — human-friendly JSON
http POST tasknote.example.com/api/tasks title=buy-milk
Auto-detects JSON, prints colored output. Nice in interactive use; scripts still tend to use curl.
ss — see what is listening or connected
A listening port (a port a process is bound to, waiting for connections). ss (socket statistics — modern replacement for netstat) is the current default.
sudo ss -tulnp
| Flag | Effect |
|---|---|
-t | TCP |
-u | UDP |
-l | listening only |
-n | numeric (no DNS resolution) |
-p | show the process |
Common tools: ss (modern, replaces netstat), netstat (legacy, recognize only), lsof (open files / ports).
ip — addresses, routes, links
A socket (an endpoint for network communication, identified by IP + port) lives on a network interface managed by ip.
ip a # addresses (replaces ifconfig)
ip r # routes
ip link # interfaces up/down
Common tools: ip (modern, replaces ifconfig), ifconfig (legacy, recognize only).
ping, traceroute, mtr
- ping host — basic reachability.
- traceroute host — list each hop on the path.
- mtr host — traceroute + ping continuously, the diagnostic favorite.
A loopback (the special address 127.0.0.1, also called localhost, that always means "this machine") is what ping 127.0.0.1 hits — useful to confirm the network stack itself works.
DNS from the shell
A DNS (Domain Name System — turns names into IP addresses) lookup answers "where is tasknote.example.com?" Records are mainly A (maps a name to an IPv4 address) and CNAME (maps a name to another name). Each record has a TTL (Time To Live — how long DNS answers are cached).
dig +short tasknote.example.com
dig +short MX example.com
host tasknote.example.com
nslookup is the legacy alternative; recognize only.
nc (netcat) — port test, simple servers
# Is port 5432 open?
nc -zv tasknote-db 5432
-z scan only (no data), -v verbose.
HTTP status code (a 3-digit code returned by an HTTP server: 200 OK, 404 Not Found, 500 Server Error)
You will see them everywhere. The shape: 2xx success, 3xx redirect, 4xx client error, 5xx server error.
TLS / SSL (Transport Layer Security — encrypts traffic between client and server; https:// uses it)
curl https://... validates the server's TLS certificate by default. If you see SSL certificate problem, the server's cert is wrong, expired, or self-signed.
13. SSH and File Transfer
ssh user@host — log in to a remote machine
ssh (Secure Shell — encrypted login to a remote server, on TCP port 22) is how you reach a server.
ssh deploy@tasknote-prod-1
SSH keys
An SSH key (a keypair: a private key kept on your laptop, a public key copied to the server) replaces passwords.
- The private file: ~/.ssh/id_ed25519. Never share. Mode 600.
- The public file: ~/.ssh/id_ed25519.pub. Safe to copy anywhere.
ssh-keygen -t ed25519 -C "you@laptop"
ssh-copy-id deploy@tasknote-prod-1
After ssh-copy-id, the public key lands in ~deploy/.ssh/authorized_keys (the server-side list of public keys allowed to log in as that user) on the server.
~/.ssh/config — pays for itself in 10 minutes
A short config file makes daily SSH painless.
# ~/.ssh/config
Host tasknote
HostName tasknote-prod-1.example.com
User deploy
IdentityFile ~/.ssh/id_ed25519
ServerAliveInterval 60
Host bastion
HostName bastion.example.com
User ops
Host tasknote-db
HostName 10.0.5.20
User deploy
ProxyJump bastion
After this, ssh tasknote is enough.
Useful SSH flags
- ssh -i path/to/key user@host — pick a specific key.
- ssh -p 2222 user@host — non-default port.
- ssh -L 5432:localhost:5432 user@host — local port forward (opens a port on your laptop that tunnels to a port on the remote). Useful for reaching a remote DB through an SSH tunnel.
- ssh -R 8080:localhost:8080 user@host — remote port forward (the reverse).
- ssh -A user@host — agent forwarding (forwards your local ssh-agent so the remote can use your keys). Useful but risky on untrusted hosts.
ssh-agent and ssh-add
The ssh-agent (a small program that holds decrypted keys in memory so you do not type your passphrase every time) is started by your shell or desktop session.
ssh-add # add the default key (asks for passphrase once)
ssh-add ~/.ssh/id_deploy # add a specific key
ssh-add -l # list loaded keys
scp and rsync
- scp file user@host:/path/ — quick one-off copy.
- rsync -avz --progress source/ user@host:/dest/ — the deploy workhorse.
rsync flags:
| Flag | Effect |
|---|---|
-a | archive (preserve everything) |
-v | verbose |
-z | compress on the wire |
--progress | progress bar |
--delete | mirror — delete files at dest that no longer exist at source |
--dry-run | show what would change without changing it |
Destructive command warning. rsync --delete removes files at the destination. Always run with --dry-run first.
known_hosts — what it is
The known_hosts file (~/.ssh/known_hosts — public keys of servers your machine has connected to before) is how SSH detects man-in-the-middle changes. The first time you SSH to a host, you confirm the fingerprint and it is recorded. If the server's key changes (rebuild, new IP), you get host key verification failed and must remove the old line: ssh-keygen -R hostname.
14. Package Management
Debian / Ubuntu — apt
apt is the modern, friendly front-end. Older scripts use apt-get and apt-cache; same job, less polished.
sudo apt update # refresh package lists
sudo apt upgrade # install available upgrades
sudo apt install htop # install a package
sudo apt remove htop # remove (keeps config)
sudo apt purge htop # remove + delete config
sudo apt search htop # search the index
apt show htop # show metadata
apt list --installed | grep nginx # what's installed
For low-level work on .deb files:
- sudo dpkg -i pkg.deb — install a downloaded .deb.
- dpkg -l | grep nginx — list installed packages.
- dpkg -L nginx — list files installed by a package.
snap
snap ships self-contained packages with their own dependencies. Recognize: snap install <name>. Common for cross-distro tools.
RHEL family — one paragraph
On RHEL, AlmaLinux, Rocky Linux, replace apt with dnf (modern package manager, replaces yum). The verbs are the same: dnf install, dnf update, dnf remove, dnf search. Older docs say yum; same idea. Low-level: rpm -i, rpm -qa.
Cross-cutting — language package managers
pip, npm, gem, cargo are language-specific package managers. They install language libraries, not OS packages. Do not confuse:
- TaskNote's npm install express puts express under node_modules/, not on apt.
- apt install python3-pip installs pip itself, then pip install requests is a Python install.
15. systemd and Services
systemd is the most useful concept after files. On Ubuntu, systemd (the modern Linux init system and service manager) runs every daemon, every boot.
A .service unit — anatomy
A systemd service unit (a small text file in /etc/systemd/system/ that tells systemd how to run a program) for TaskNote:
# /etc/systemd/system/tasknote.service
[Unit]
Description=TaskNote API
After=network.target postgresql.service
[Service]
Type=simple
User=deploy
WorkingDirectory=/srv/tasknote
EnvironmentFile=/srv/tasknote/.env
ExecStart=/usr/bin/node /srv/tasknote/server.js
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
The target (a systemd unit that groups others — multi-user.target is "boot finished, services up") under WantedBy controls when the service starts.
systemctl — control services
sudo systemctl status tasknote # is it running? recent logs.
sudo systemctl start tasknote
sudo systemctl stop tasknote
sudo systemctl restart tasknote
sudo systemctl reload tasknote # if the unit defines reload
sudo systemctl enable tasknote # start on boot
sudo systemctl disable tasknote
sudo systemctl daemon-reload # after editing a unit file
journalctl — logs from systemd
The journal (systemd's binary log database, read with journalctl) holds every service's stdout/stderr.
journalctl -u tasknote # all logs for the unit
journalctl -u tasknote -n 200 # last 200 lines
journalctl -u tasknote -f # live tail
journalctl -u tasknote --since "10 min ago"
journalctl -u tasknote --since "2026-05-01" --until "2026-05-02"
journalctl -u tasknote -p err # only error level and worse
journalctl -k # kernel messages
systemd timers — modern cron alternative
A systemd timer (a .timer unit that triggers a .service on a schedule) is the modern replacement for cron. Two files:
# /etc/systemd/system/tasknote-backup.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-tasknote.sh
User=deploy
# /etc/systemd/system/tasknote-backup.timer
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
Then sudo systemctl enable --now tasknote-backup.timer. Compared to cron, timers integrate with journalctl, can depend on other units, and survive reboots better.
Real situation — "The app crashed after a deploy." Run systemctl status tasknote to see the most recent failure. Then journalctl -u tasknote -n 200 to see why. Restart only after you read the error.
16. Logs — Where Things Live and How to Read Them
journalctl — first stop on a modern Ubuntu
We covered the flags above. It is the central log for systemd-managed services.
/var/log/ — the traditional location
Files written by daemons that do their own logging:
| File / dir | Contents |
|---|---|
| /var/log/syslog | general system log |
| /var/log/auth.log | logins, sudo, SSH |
| /var/log/kern.log | kernel messages |
| /var/log/dpkg.log | package install/remove history |
| /var/log/nginx/ | Nginx access and error logs |
| /var/log/tasknote/ | by convention if app logs to a file |
Live tail and multitail
- tail -f /var/log/syslog — one file, live.
- multitail file1 file2 — multiple files in panes (install with apt install multitail).
dmesg — kernel messages
dmesg --ctime | tail
When the disk has I/O errors or a USB device misbehaves, dmesg is where you look.
Log rotation
A log rotation tool (typically logrotate, runs daily, compresses and trims old log files) keeps /var/log/ from filling. Configs live in /etc/logrotate.d/. Recognition only.
17. Archives, Compression, and Scheduling
tar — the two patterns you use
tar (tape archive — bundles files into one .tar, often piped through gzip) is everywhere. A few patterns cover 95% of cases.
# Create a .tar.gz from a directory.
tar -czf tasknote-backup.tgz /srv/tasknote
# Extract a .tar.gz.
tar -xzf tasknote-backup.tgz
# Peek without extracting.
tar -tzf tasknote-backup.tgz
| Flag | Effect |
|---|---|
| -c | create |
| -x | extract |
| -t | list |
| -z | gzip |
| -J | xz (smaller, slower) |
| -f file | archive name |
| -v | verbose |
| -C dir | change to dir before doing it |
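The three patterns compose into a round-trip you can run in a scratch directory. A minimal sketch — the workdir path and file names here are illustrative, not part of TaskNote:

```shell
set -euo pipefail
workdir=$(mktemp -d)                               # throwaway scratch area under /tmp
mkdir -p "$workdir/site"
echo "hello" > "$workdir/site/index.html"

tar -C "$workdir" -czf "$workdir/site.tgz" site    # create (-c) with gzip (-z)
listing=$(tar -tzf "$workdir/site.tgz")            # peek (-t) without extracting
mkdir -p "$workdir/restore"
tar -C "$workdir/restore" -xzf "$workdir/site.tgz" # extract (-x) somewhere else

restored=$(cat "$workdir/restore/site/index.html")
echo "$listing"
echo "restored: $restored"
```

Note how -C keeps both the archive paths and the extraction location under control without any cd.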
Compressors
- gzip file — replaces file with file.gz. gunzip reverses.
- xz file — smaller files, slower. unxz reverses.
- bzip2 file — middle ground, less common today.
- zstd — modern, fast, good ratio (newer Ubuntu).
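The "replaces the file" behavior is easy to miss until it bites you. A quick round-trip sketch on a throwaway temp file:

```shell
set -euo pipefail
tmp=$(mktemp -d)
printf 'hello tasknote\n' > "$tmp/app.log"

gzip "$tmp/app.log"                 # app.log is gone; app.log.gz replaces it
test ! -e "$tmp/app.log" && test -e "$tmp/app.log.gz" && echo "compressed in place"

gunzip "$tmp/app.log.gz"            # and back again
roundtrip=$(cat "$tmp/app.log")
echo "$roundtrip"
```

Use gzip -k if you want to keep the original alongside the compressed copy.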
zip / unzip — for cross-platform
zip -r site.zip site/
unzip site.zip
Use when sharing with Windows users.
Cron — the classic scheduler
A cron (a daemon that runs jobs on a schedule) job lives in a crontab (per-user list of scheduled jobs).
crontab -e # edit your user's crontab
crontab -l # show it
sudo crontab -e # root's crontab
A cron expression (the five fields: minute, hour, day-of-month, month, day-of-week) is read left to right.
# m h dom mon dow command
15 3 * * * /usr/local/bin/backup-tasknote.sh # 03:15 every day
0 */2 * * * /usr/local/bin/health-check.sh # every 2 hours
30 1 * * 0 /usr/local/bin/weekly-cleanup.sh # 01:30 every Sunday
*/5 * * * * /usr/local/bin/heartbeat.sh # every 5 minutes
Other tools:
- anacron (catches missed cron runs after the machine was off) — laptops and servers that sleep.
- systemd timers (Section 15) — modern alternative.
18. Editor Survival
You will edit configs on a remote server. You cannot escape vim.
vim — 15-minute survival kit
vim (or vi) is modal (it has separate modes for moving, inserting, and giving commands). The Esc reflex is the most important habit.
| State | Keys |
|---|---|
| Open file | vim path/to/file |
| Enter insert mode (type text) | i |
| Leave insert mode | Esc |
| Save | :w then Enter |
| Save and quit | :wq |
| Quit without saving (panic button) | :q! |
| Move | arrows, or h j k l |
| Delete one line | dd |
| Yank (copy) one line | yy |
| Paste | p |
| Undo | u |
| Redo | Ctrl-r |
| Search forward | /pattern then n for next |
| Replace all in file | :%s/old/new/g |
Real situation — you SSH in, run sudo vim /etc/nginx/nginx.conf, change a line, type :wq. If you panic, :q! is the eject button — quit without saving.
nano — five minutes
For people who refuse vim. nano file opens. Keys are listed at the bottom: Ctrl-O save, Ctrl-X exit, Ctrl-W search.
EDITOR
crontab -e, git commit, visudo all open whichever editor is in the EDITOR environment variable (a key=value pair available to programs you run; viewed with env, printenv).
echo 'export EDITOR=vim' >> ~/.bashrc
Or nano if that is your choice.
19. Bash Scripting Essentials
This is the section that turns a command user into a DevOps engineer.
The shebang
A shebang (the first line of a script — #!/usr/bin/env bash — tells the kernel which interpreter to run) makes the script directly executable.
#!/usr/bin/env bash
echo "hello"
chmod +x script.sh, then ./script.sh runs it.
set -euo pipefail — the safety net
Every script starts with this line. We define each piece.
- set -e (exit immediately on any non-zero exit code) — the script stops the moment a command fails.
- set -u (treat unset variables as an error) — typos in variable names blow up loudly instead of silently expanding to empty strings.
- set -o pipefail (a pipeline fails if any command in it fails) — without this, wrong-command | true succeeds.
#!/usr/bin/env bash
set -euo pipefail
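To see what pipefail actually changes, compare the exit status of the same failing pipeline with and without it — a minimal sketch:

```shell
# Without pipefail, a pipeline's status is the LAST command's status,
# so the early failure of `false` is silently swallowed.
set -u
set +o pipefail
false | true
without=$?                     # 0 — the failure vanished

set -o pipefail
if false | true; then
  with=0
else
  with=$?                      # 1 — pipefail surfaces the first failure
fi
echo "without pipefail: $without, with pipefail: $with"
```

This is exactly the wrong-command | true case from the list above.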
Variables
- Assignment: no spaces around = — NAME=value.
- Read: $NAME or ${NAME}.
- Default: ${NAME:-default} — use default if NAME is unset or empty.
APP="tasknote"
HOME_DIR="/srv/$APP"
LOG_FILE="${LOG_FILE:-/var/log/$APP.log}"
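The ${NAME:-default} form is easy to verify directly. A sketch, reusing the variable names from the example above:

```shell
set -u                                          # unset variables are errors...
APP="tasknote"
unset LOG_FILE 2>/dev/null || true
default_used="${LOG_FILE:-/var/log/$APP.log}"   # ...but :- makes this safe anyway
LOG_FILE="/tmp/custom.log"
override_used="${LOG_FILE:-/var/log/$APP.log}"
echo "$default_used"      # falls back to the default
echo "$override_used"     # the set value wins
```

This combination — set -u plus ${VAR:-fallback} — is how robust scripts read optional configuration.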
Special variables
| Var | Meaning |
|---|---|
| $0 | script name |
| $1, $2 … $9 | first, second … positional argument |
| $# | number of arguments |
| $@ | all arguments (use as "$@" to preserve spaces) |
| $? | exit code of the last command |
| $$ | current PID |
An exit code (the integer a program returns when it finishes — 0 means success, anything else is a failure) is the basis for set -e and for chaining commands with && (run next if previous succeeded) and || (run next if previous failed).
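A short sketch of && and || driven by exit codes:

```shell
# && runs the next command only if the previous one exited 0;
# || runs it only if the previous one failed.
ls /tmp > /dev/null && echo "ls succeeded"
code=0
false || code=$?           # capture the failure without stopping the script
echo "false exited with $code"
```

The `|| code=$?` pattern is also how you record a failure inside a set -e script without aborting it.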
Command substitution
A command substitution ($(cmd) — runs the command and replaces itself with its stdout) is how you capture output.
DATE=$(date +%F)
COMMIT=$(git rev-parse --short HEAD)
Don't confuse with… backticks
`cmd` — same idea, legacy syntax. Use $(...).
Quoting — the bug factory
- Single quotes: literal, no expansion. '$HOME' is the literal five characters.
- Double quotes: variables expand. "$HOME" becomes /home/deploy.
- No quotes: word-split on spaces. Almost always wrong with paths.

Why it matters: unquoted variables are the leading source of shell bugs. rm -rf $DIR/ with DIR="" becomes rm -rf / — a famous disaster.
Always quote: "$VAR", "$@", "$(cmd)".
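A tiny helper function shows the word-splitting difference directly (count_args is our own name, for illustration only):

```shell
set -u
count_args() { echo $#; }           # prints how many arguments it received
path="file with spaces.txt"
unquoted=$(count_args $path)        # word-splits into three arguments
quoted=$(count_args "$path")        # stays one argument
echo "unquoted: $unquoted, quoted: $quoted"
```

Swap in a real path with spaces and an unquoted rm or mv, and "three arguments" becomes "three files touched".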
Conditionals
if [[ -f /srv/tasknote/.env ]]; then
echo "env file present"
elif [[ -d /srv/tasknote ]]; then
echo "dir present, env missing"
else
echo "nothing here"
fi
Common tests:
| Test | Meaning |
|---|---|
| -f path | path is a regular file |
| -d path | path is a directory |
| -z str | string is empty |
| -n str | string is non-empty |
| == / != | string equality |
| -eq / -ne / -lt / -gt | integer comparison |
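The string and integer tests combine naturally inside a function. A sketch — classify is a hypothetical helper, not part of TaskNote:

```shell
classify() {
  local n="${1:-}"                    # :- so an empty call is safe under set -u
  if [[ -z "$n" ]]; then
    echo "empty"
  elif [[ "$n" -gt 10 ]]; then
    echo "big"
  else
    echo "small"
  fi
}
classify ""       # empty
classify 42       # big
classify 3        # small
```

Note the order: test -z first, so the integer comparison never sees an empty string.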
Loops
for f in /var/log/*.log; do
echo "rotating $f"
done
i=0
while [[ $i -lt 5 ]]; do
echo "i=$i"
i=$((i + 1))
done
case statements
case "$1" in
start) systemctl start tasknote ;;
stop) systemctl stop tasknote ;;
restart) systemctl restart tasknote ;;
*) echo "usage: $0 {start|stop|restart}"; exit 64 ;;
esac
Functions
log() {
local level="$1"; shift
printf '[%s] %s %s\n' "$(date +%FT%T)" "$level" "$*"
}
log INFO "starting deploy"
local keeps a variable inside the function.
Heredocs
A here-document (a <<EOF block that feeds multi-line text as stdin) lets you embed multi-line input directly in a script. Quote the marker as <<'EOF' to suppress variable expansion.
psql -U app tasknote <<EOF
ALTER TABLE tasks ADD COLUMN priority int DEFAULT 0;
EOF
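The quoted-marker rule is easy to see with cat — a minimal sketch:

```shell
set -u
NAME="world"
expanded=$(cat <<EOF
hello $NAME
EOF
)
literal=$(cat <<'EOF'
hello $NAME
EOF
)
echo "$expanded"     # hello world — variable expanded
echo "$literal"      # hello $NAME — left literal
```

Use the quoted form whenever the block contains $ signs that belong to the receiving program (awk scripts, SQL with $$, config templates).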
trap — cleanup on exit or interrupt
trap (register a command to run on a signal or on script exit) makes scripts safe to interrupt.
TMP=$(mktemp -d)
trap 'rm -rf "$TMP"' EXIT # run on any exit, success or failure
# ... use $TMP ...
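To watch the EXIT trap fire, run the pattern in a child shell and capture its output — a sketch:

```shell
# The trap runs on ANY exit of the child script — success, failure, or interrupt.
output=$(bash -c '
  TMP=$(mktemp -d)
  trap "rm -rf \"$TMP\"; echo cleaned" EXIT
  echo "working in $TMP"
')
echo "$output"      # the last line is "cleaned", printed by the trap itself
```

The same trap fires if the child dies mid-run, which is the whole point: no stray temp directories after a Ctrl-C.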
shellcheck — run on every script
shellcheck script.sh (a static analyzer that catches shell bugs early — quoting, unset variables, common pitfalls) finds the bugs you would otherwise discover at 3 a.m.
Install: sudo apt install shellcheck. Run before every commit.
Idempotent (running it twice produces the same result as once)
A good script is idempotent. mkdir -p instead of mkdir. systemctl enable --now instead of separate enable and start commands — one idempotent step instead of two.
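Idempotence in one screen — a sketch where setup (our own name) converges to the same state no matter how often it runs:

```shell
set -eu
tmp=$(mktemp -d)
setup() { mkdir -p "$tmp/releases" "$tmp/shared/logs"; }   # -p: no error if present
setup       # first run creates the tree
setup       # second run changes nothing and still exits 0
echo "converged"
```

Plain mkdir here would fail the second call and, under set -e, kill the whole script.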
Dry run (a flag that prints what the script would do without actually doing it)
Add --dry-run to dangerous scripts. rsync already has it. Build it into your own.
Golden rules of shell scripts
- Start with #!/usr/bin/env bash and set -euo pipefail.
- Quote every variable: "$VAR".
- Use $(cmd), not backticks.
- Be idempotent where possible.
- Add a --dry-run flag for destructive operations.
- Log what you do, with timestamps.
- Never rm -rf "$VAR/" without checking $VAR is set and non-empty.
- trap cleanup commands so interrupted runs do not leave a mess.
- Set timeouts on network calls (curl --max-time 10).
- Run shellcheck on every script you commit.
20. Worked Example — An Annotated Deploy Script
A real deploy script for TaskNote. Every line carries a comment in plain language explaining what it does and why. Every concept here was introduced in sections 3–19.
#!/usr/bin/env bash
# Tells the kernel to run this file with bash from PATH.
set -euo pipefail
# -e exit on first error, -u treat unset vars as error, pipefail catch failures inside pipes.
# ----- arguments -----
APP="tasknote"
RELEASE_TAG="${1:-}"
# First CLI arg becomes RELEASE_TAG. ${1:-} avoids "unset" with set -u.
if [[ -z "$RELEASE_TAG" ]]; then
echo "usage: $0 <release-tag>"; exit 64
# exit 64 is conventional for "command line usage error".
fi
# ----- paths -----
RELEASES_DIR="/srv/$APP/releases"
RELEASE_DIR="$RELEASES_DIR/$RELEASE_TAG"
CURRENT_LINK="/srv/$APP/current"
LOG="/var/log/$APP/deploy-$(date +%Y%m%d-%H%M%S).log"
# Log file with a timestamp so we never overwrite a previous run.
# ----- cleanup on interrupt -----
TMP=$(mktemp -d)
trap 'rm -rf "$TMP"' EXIT
# Whatever happens, the temp dir is wiped on exit.
log() {
printf '[%s] %s\n' "$(date +%FT%T)" "$*" | tee -a "$LOG"
# tee writes to the log AND to stdout so a human watching can see progress.
}
# ----- precondition: release directory must exist -----
if [[ ! -d "$RELEASE_DIR" ]]; then
log "ERROR: release dir $RELEASE_DIR does not exist"; exit 1
fi
# ----- write the release marker -----
log "deploying $APP release $RELEASE_TAG"
echo "$RELEASE_TAG" > "$RELEASE_DIR/.release"
# > overwrites silently — safe here because the file is just a marker.
# ----- install dependencies (idempotent) -----
log "installing node dependencies"
( cd "$RELEASE_DIR" && npm ci --omit=dev ) >> "$LOG" 2>&1
# Subshell ( ... ) so the `cd` does not leak. >> appends; 2>&1 merges stderr into stdout.
# ----- migrate the database (heredoc) -----
log "running migrations"
sudo -u postgres psql -d "$APP" <<SQL >> "$LOG" 2>&1
\set ON_ERROR_STOP true
BEGIN;
\i $RELEASE_DIR/migrations/run.sql
COMMIT;
SQL
# Heredoc feeds the multi-line SQL block as psql's stdin.
# ----- atomic switch of the 'current' symlink -----
ln -sfn "$RELEASE_DIR" "$CURRENT_LINK"
# -s symlink, -f force, -n do not dereference an existing symlink to a directory — the link itself is replaced in one quick swap.
# ----- restart the service -----
log "restarting $APP"
sudo systemctl restart "$APP"
# ----- wait for the service to become ready -----
for i in 1 2 3 4 5; do
if curl -fsS --max-time 5 http://127.0.0.1:8080/health > /dev/null; then
log "health check passed on attempt $i"
break
fi
log "health check attempt $i failed, retrying..."
sleep 2
if [[ $i -eq 5 ]]; then
log "ROLLBACK: health check failed 5 times, reverting"
PREV=$(ls -1t "$RELEASES_DIR" | sed -n '2p')
ln -sfn "$RELEASES_DIR/$PREV" "$CURRENT_LINK"
sudo systemctl restart "$APP"
exit 1
fi
done
# ----- notify a webhook -----
WEBHOOK="${DEPLOY_WEBHOOK:-}"
# ${VAR:-} so we do not blow up under set -u if it is unset.
if [[ -n "$WEBHOOK" ]]; then
curl --max-time 5 -fsS -X POST -H 'Content-Type: application/json' \
-d "{\"text\":\"$APP $RELEASE_TAG deployed\"}" "$WEBHOOK" >> "$LOG" 2>&1 || true
# `|| true` so a webhook outage does not fail the deploy.
fi
log "deploy of $APP $RELEASE_TAG complete"
Read it once with the comments, then once with the comments hidden. Every concept — set -euo pipefail, trap, heredoc, command substitution, conditional, loop, tee, curl --fail, systemctl, atomic symlink — has a section earlier in this tutorial.
Run shellcheck deploy.sh before you commit.
21. Common DevOps One-Liners — Cheat Sheet
A scannable reference. Each entry: the job, the command, one sentence about what the pipe does.
Logs and traffic
- Top 10 IPs hitting the server — awk '{print $1}' access.log | sort | uniq -c | sort -rn | head — column 1, group, count, biggest first, top 10.
- HTTP status code mix — awk '{print $9}' access.log | sort | uniq -c | sort -rn — which status codes dominate.
- Slowest 20 requests — awk '{print $NF, $7}' access.log | sort -rn | head -20 — last column (time), then path, biggest first.
- Errors in TaskNote in the last hour — journalctl -u tasknote --since "1 hour ago" -p err.
- Tail multiple service logs at once — journalctl -u nginx -u tasknote -f.
Disk and files
- Find the biggest files anywhere — sudo du -ah / 2>/dev/null | sort -rh | head -20 — every file's size, human-readable, biggest first, top 20.
- Biggest folders under /var — sudo du -sh /var/* 2>/dev/null | sort -h — child sizes, sorted.
- Files older than 30 days under /backups — find /backups -type f -mtime +30 — list before deleting.
- Delete files older than 30 days — find /backups -type f -mtime +30 -delete — destructive; preview without -delete first.
- Truncate a runaway log — : > /var/log/runaway.log — empties it without deleting the inode.
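The truncate-in-place trick is worth seeing once. A sketch on a temp file — a real runaway log behaves the same, because the inode (and any file descriptor a daemon holds open) survives:

```shell
set -eu
log=$(mktemp)
printf 'old noise\n' > "$log"
before=$(wc -c < "$log")
: > "$log"                # ':' does nothing; '>' truncates the file in place
after=$(wc -c < "$log")
echo "before=$before after=$after"
```

Deleting the file instead would leave the daemon writing to an unlinked inode — disk space stays used until the process restarts.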
Network and processes
- What is on port 80 — sudo ss -tulnp | grep :80.
- Processes using a given file — sudo lsof /var/log/tasknote.log.
- What process holds port 5432 — sudo ss -tulnp | grep :5432.
- Quick connectivity test to a DB — nc -zv tasknote-db 5432.
- Resolve a hostname — dig +short tasknote.example.com.
Containers and Kubernetes
- All containers, including stopped — docker ps -a.
- Tail logs of a docker container — docker logs -f <name>.
- Names of running pods — kubectl get pods -o json | jq -r '.items[].metadata.name'.
- Pods not in Running state — kubectl get pods --field-selector=status.phase!=Running.
- Restart a Kubernetes deployment — kubectl rollout restart deployment/<name>.
Security and SSH
- Public key fingerprint — ssh-keygen -lf ~/.ssh/id_ed25519.pub.
- Recent SSH logins — sudo grep 'Accepted' /var/log/auth.log | tail.
- Active login sessions — who and last | head.
Misc
- Generate a random password — openssl rand -base64 24.
- Find a binary you forgot the name of — compgen -c | grep -i task.
- Watch a command refresh every 2 seconds — watch -n 2 'systemctl is-active tasknote'.
- Time a command — time long-job.sh.
22. Anti-Patterns and Common Traps
Short, scannable. Avoid each one.
- Running scripts you copy-pasted without reading them. Read before you execute, especially anything with sudo, rm, or >.
- curl | bash from random sites. You are running unaudited code as your user. Save to a file, read it, then run.
- rm -rf "$VAR/" without checking $VAR is set and non-empty. Famous disaster vector. Always: [[ -n "$VAR" ]] || exit 1.
- Ignoring exit codes — chaining cmd1; cmd2 when you meant cmd1 && cmd2. Semicolon runs both regardless; && only runs the second if the first succeeded.
- Editing /etc/sudoers directly instead of visudo. A syntax error locks you out.
- Saving secrets in .bash_history. Anything you export PASSWORD=... lands there. Use a .env file or a secret manager.
- chmod 777 to "fix permissions." Hides the real problem (wrong owner) and makes the file world-writable. Diagnose with ls -l.
- Storing dotfile edits only on the server. Lose the box, lose the config. Keep .bashrc, .tmux.conf, .vimrc in Git.
- Using :latest in scripts instead of pinning. Surprise breakage on the next pull. Pin versions or commits.
- Not running shellcheck on scripts you commit. It catches the same handful of mistakes humans repeat forever.
- Never quoting variables. "$VAR" is almost always right; $VAR is almost always wrong.
- Using cat to send a 2 GB log to the screen. Use less, head, tail, or pipe through grep.
23. What to Read Next
A tight list to grow beyond this tutorial.
- The Linux Command Line — William Shotts. Free PDF. The canonical hand-holding tour.
- explainshell.com — paste any command, get every flag explained. Bookmark it.
- Greg's Wiki / BashFAQ — mywiki.wooledge.org/BashFAQ. Answers the questions you are about to ask.
- Bash Pitfalls — mywiki.wooledge.org/BashPitfalls. The bugs you will write at least once.
- The official man pages and --help. They are dense, but they are authoritative.
- tldr / tealdeer. Community examples of every common command.
- shellcheck docs — shellcheck.net/wiki. Every warning has a page explaining why.
- Unix Power Tools — Jerry Peek et al. Older but still gold.
24. Glossary
Shell concepts
- alias — a short name for a longer command, set with alias ll='ls -lah'.
- argument — a value you pass to a command after the command name.
- bash — Bourne-Again SHell, the default scripting shell on Linux.
- command — a program you can run from the shell.
- command substitution — $(cmd) runs cmd and replaces itself with its output.
- current working directory — the folder the shell is "in", shown by pwd.
- environment variable — a key=value pair available to programs you run.
- exit code — integer a program returns when it finishes; 0 success, anything else failure.
- flag / option — a switch passed to a command, like -l.
- function — a block of code defined and called by name within a shell.
- here-document — <<EOF block that feeds multi-line text as stdin.
- hidden file (dotfile) — file whose name starts with a dot (.); not shown by default.
- home directory — your personal folder, also written ~.
- PATH — list of directories the shell searches for executables.
- pipe (|) — connects stdout of one command to stdin of the next.
- prompt — the text the shell prints before each command.
- redirect — sends stdin/stdout/stderr to a file (>, >>, <, 2>, &>).
- shebang — first line #!/usr/bin/env bash selecting the interpreter.
- shell — program that reads what you type and runs other programs.
- short flag (-l) vs long flag (--list) — single-letter form vs whole-word form of the same option.
- subshell — child shell that runs a group of commands; changes do not leak out.
- tab completion — pressing Tab to complete commands and file names.
- terminal — the window or app you type into; not the shell itself.
- variable — a named value, set with NAME=value, read with $NAME.
- wildcard / glob — *, ?, [abc], {a,b} patterns expanded by the shell.
- zsh — Z Shell, friendlier interactive shell, default on macOS.
Filesystem concepts
- /etc/fstab — file listing what to mount on boot.
- absolute path vs relative path — starts at / vs interpreted from pwd.
- block device — kernel object representing a disk or partition.
- filesystem — on-disk format like ext4, xfs, btrfs.
- group — a set of users; files have an owner and a group.
- hard link — second name for the same inode.
- inode — on-disk record holding a file's metadata.
- mount point — directory where a filesystem is attached.
- octal mode — three-digit number encoding rwx bits, e.g. 755.
- owner — the user account that owns a file.
- permissions (rwx) — read, write, execute bits for owner/group/others.
- setuid — bit making an executable run as its owner.
- sticky bit — directory bit (on /tmp) preventing users from deleting each other's files.
- swap — file or partition Linux uses when RAM is full.
- symlink (soft link) — file that points by name to another path.
- umask — permission bits stripped from new files by default.
Process concepts
- daemon — long-running background process.
- foreground vs background — shell waits / shell does not wait.
- job — backgrounded task in your shell, listed by jobs.
- nice / renice — change a process's CPU priority.
- nohup — run a job that survives logout.
- parent process — the process that started this one.
- PID (Process ID) — number identifying a running process.
- process — a running program.
- signal — kernel message to a process; SIGTERM, SIGKILL, SIGHUP, SIGINT.
- terminal multiplexer — tmux/screen, runs sessions you can detach from.
Networking concepts
- A record — DNS entry mapping a name to an IPv4 address.
- authorized_keys — server-side list of public keys allowed to log in.
- CNAME — DNS entry mapping a name to another name.
- DNS — Domain Name System; turns names into IP addresses.
- HTTP status code — 3-digit code returned by an HTTP server.
- known_hosts — local list of known server public keys.
- listening port — port a process is bound to, waiting for connections.
- loopback — special address 127.0.0.1, also called localhost.
- port — numbered network channel on an address.
- socket — endpoint for network communication, IP + port.
- SSH key — keypair: private file local, public file installed on servers.
- TLS / SSL — Transport Layer Security; encrypts traffic.
- TTL — Time To Live; how long a DNS answer is cached.
Service & log concepts
- cron — daemon that runs jobs on a schedule.
- cron expression — five-field schedule (minute, hour, day-of-month, month, day-of-week).
- crontab — per-user list of scheduled cron jobs.
- journalctl — command to query the systemd journal.
- systemd — modern Linux init system and service manager.
- systemd service unit — .service file describing how to run a program.
- systemd timer — .timer unit triggering a service on a schedule.
- target — systemd unit grouping others; multi-user.target is "boot done".
Scripting concepts
- dry run — flag printing what a script would do without doing it.
- idempotent — running it twice produces the same result as once.
- linting (shellcheck) — static analysis catching shell bugs early.
- set -e — exit immediately on any non-zero exit code.
- set -u — treat unset variables as errors.
- set -o pipefail — pipeline fails if any stage fails.
- trap — register a command to run on a signal or on exit.
Tools (alphabetical)
- anacron — catches missed cron runs after the machine was off.
- apt — modern Debian/Ubuntu package manager.
- apt-get / apt-cache — older script-friendly Debian/Ubuntu tools.
- awk — field-oriented mini-language for text processing.
- bash — default scripting shell, everywhere.
- bat — syntax-highlighted modern alternative to cat / less.
- btop — prettier modern process viewer.
- bzip2 — older compressor, between gzip and xz.
- cargo — Rust language package manager.
- command -v — script-friendly check whether a command exists.
- croc — easy peer-to-peer file transfer.
- curl — default HTTP client, scriptable.
- cut — slice columns out of each line.
- df — filesystem-level disk usage.
- dig — detailed DNS lookup.
- dnf — modern RHEL-family package manager, replaces yum.
- dpkg — low-level Debian package tool.
- du — directory-level disk usage.
- fd — modern, faster, friendlier find.
- find — built-in, expressive file search.
- fish — beginner-friendly shell, not script-portable.
- fx — interactive JSON viewer.
- gem — Ruby language package manager.
- glances — broad system view in one screen.
- grep — default text search.
- gzip / gunzip — gzip compressor / decompressor.
- host — simple DNS lookup.
- htop — interactive, colored process viewer.
- httpie — human-friendly JSON HTTP client.
- ifconfig — legacy network-interface tool, recognize only.
- info — GNU long-form manuals.
- ip — modern network tool, replaces ifconfig.
- jq — JSON queries.
- less — default pager, scroll up and down.
- locate / mlocate — instant indexed file search.
- lsblk — list block devices.
- lsof — list open files and ports.
- man — manual pages.
- micro — modern, intuitive editor.
- more — legacy pager, simpler than less.
- mosh — resilient SSH alternative for flaky networks.
- mtr — traceroute + ping continuously.
- nano — beginner-friendly editor.
- ncdu — interactive du.
- nc (netcat) — port checks, simple servers.
- netstat — legacy port-listing tool, recognize only.
- npm — Node.js language package manager.
- nslookup — legacy DNS lookup, recognize only.
- OpenSSH — default SSH server and client on Linux.
- pip — Python language package manager.
- ping — basic network reachability test.
- printf — formatted output, prefer in scripts where format matters.
- ps — process snapshot.
- rg / ripgrep — faster grep that respects .gitignore.
- rpm — low-level RHEL-family package tool.
- rsync — sync, resume, dry-run; deploy/backup workhorse.
- scp — simple copy over SSH.
- screen — legacy terminal multiplexer.
- sed — stream editor for substitutions and small edits.
- service — legacy wrapper around init scripts.
- sftp — interactive file transfer over SSH.
- shellcheck — catches bash bugs early.
- snap — self-contained packages with their own deps.
- sort — order lines.
- ss — modern port listing, replaces netstat.
- systemctl — control systemd services.
- tar — bundles files into one archive.
- tealdeer / tldr — community examples, fast.
- tee — write and pass through.
- tmux — modern, scriptable terminal multiplexer.
- top — built-in process viewer.
- tr — translate / squeeze / delete characters.
- traceroute — list each hop on the network path.
- uniq — collapse adjacent duplicate lines.
- vim / neovim — modal, powerful, ubiquitous editors.
- watch — re-run a command every N seconds.
- wc — count lines, words, bytes.
- wget — downloads, recursive.
- xargs — turn lines into arguments.
- xz — strong compressor, slower than gzip.
- yq — YAML queries.
- yum — legacy RHEL package manager, recognize only.
- zellij — newer, friendlier terminal multiplexer.
- zip / unzip — cross-platform archive.