AWK one-liners are the most immediately useful form of AWK knowledge — compact expressions that solve real problems without writing a full program. This reference collects the most valuable patterns, organized by task type, with explanations of how each one works.
1. Line selection and filtering
# ── Line selection ────────────────────────────────────────
awk 'NR==5' # print only line 5
awk 'NR>=5 && NR<=10' # print lines 5-10
awk 'NR%2==0' # print even-numbered lines
awk 'NR%2==1' # print odd-numbered lines
awk 'END{print NR}' # print total line count (like wc -l)
awk 'NR==1' # print first line (like head -1)
awk 'END{print}' # print last line (like tail -1)
# ── Pattern filtering ─────────────────────────────────────
awk '/ERROR/' # lines containing ERROR (like grep)
awk '!/ERROR/' # lines NOT containing ERROR (like grep -v)
awk '/ERROR/ && /mysql/' # lines with both patterns
awk '/ERROR/ || /FATAL/' # lines with either pattern
awk '$2 ~ /^[0-9]+$/' # field 2 matches numeric regex
awk '$1 !~ /^#/' # field 1 does not start with #
# ── Numeric filtering ─────────────────────────────────────
awk '$3 > 80' # field 3 greater than 80
awk '$3 > 80 && $4 < 10' # compound numeric condition
awk 'NF > 3' # lines with more than 3 fields
awk 'NF == 0' # empty lines
awk 'length > 80' # lines longer than 80 chars
# ── Range patterns ────────────────────────────────────────
awk '/START/,/END/' # lines between START and END (inclusive)
awk '/START/,/END/{if(!/START/&&!/END/)print}' # exclusive
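To see the inclusive and exclusive range forms side by side, here is a quick sketch on a fabricated six-line input (the letters are made-up payload):

```shell
# Two marker lines with payload between them.
printf 'a\nSTART\nb\nc\nEND\nd\n' |
awk '/START/,/END/'                              # inclusive: START b c END
printf 'a\nSTART\nb\nc\nEND\nd\n' |
awk '/START/,/END/{if(!/START/&&!/END/)print}'   # exclusive: b c
```

Note that a range pattern turns on at the first START and off at the next END; with multiple START/END pairs in a file, each pair opens and closes its own range.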
2. Field manipulation and text transformation
# ── Field printing ────────────────────────────────────────
awk '{print $1}' # first field (like cut -f1)
awk '{print $NF}' # last field
awk '{print $(NF-1)}' # second to last field
awk '{print $2,$4}' # fields 2 and 4 with space
awk '{$1=$2=""; print}' # remove first two fields (leaves leading separators)
awk '{$2="REDACTED"; print}' # replace field 2
awk 'BEGIN{OFS=","} {print $1,$3,$5}' # CSV output
# ── Reverse and reorder fields ────────────────────────────
awk '{for(i=NF;i>0;i--) printf "%s%s",$i,(i>1?OFS:ORS)}' # reverse
awk '{print $3,$1,$2}' # reorder: fields 3,1,2
# ── Text transformation ───────────────────────────────────
awk '{print toupper($0)}' # uppercase entire line
awk '{print tolower($1)}' # lowercase field 1
awk '{gsub(/^[ \t]+|[ \t]+$/, ""); print}' # trim leading/trailing whitespace
awk '{gsub(/ +/, "\t"); print}' # spaces to tabs
awk '{gsub(/\t/, ","); print}' # TSV to CSV
awk '{gsub(/,/, "\t"); print}' # CSV to TSV
# ── Add line numbers ──────────────────────────────────────
awk '{print NR": "$0}' # number each line
awk '{printf "%4d %s\n", NR, $0}' # right-aligned line numbers
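The OFS trick deserves a worked example: OFS is only applied when awk rebuilds $0, which happens on any field assignment. A sketch on a made-up whitespace table:

```shell
# Selected fields are joined with the new OFS automatically.
printf 'alice 1 dev nyc 9\n' | awk 'BEGIN{OFS=","} {print $1,$3,$5}'
# → alice,dev,9

# To re-join ALL fields with the new OFS, force a rebuild of $0
# with the no-op assignment $1=$1.
printf 'alice 1 dev nyc 9\n' | awk 'BEGIN{OFS=","} {$1=$1; print}'
# → alice,1,dev,nyc,9
```

Without the `$1=$1`, `print` would emit the original line untouched, since no field was assigned and $0 was never rebuilt.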
3. Aggregation and deduplication
# ── Deduplication ─────────────────────────────────────────
awk '!seen[$0]++' # unique lines, preserve order
awk '!seen[$1]++' # unique on field 1
awk '!seen[$2,$4]++' # unique on fields 2+4
# ── Sum and count ─────────────────────────────────────────
awk '{s+=$1} END{print s}' # sum field 1
awk '{s+=$1} END{print s/NR}' # average field 1
awk 'NR==1||$1>m{m=$1} END{print m}' # max (also correct for negatives)
awk 'NR==1||$1<m{m=$1} END{print m}' # min
# ── Word count ────────────────────────────────────────────
awk '{ for(i=1;i<=NF;i++) wc[$i]++ }
END { for(w in wc) print wc[w], w }' # word frequency table
# ── Group by count ────────────────────────────────────────
awk '{c[$1]++} END{for(k in c) print c[k],k}' | sort -rn
# ── Merge adjacent duplicate lines ────────────────────────
awk '$0!=prev{print; prev=$0}' # uniq without sorting
# ── Print unique count ────────────────────────────────────
awk '!seen[$0]++{n++} END{print n, "unique lines"}'
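The group-by-count pattern is easiest to trust after seeing it run. On a fabricated access log whose first field is a status code, the array keys are the codes and the values are their counts; piping through sort -rn orders the (otherwise unordered) for-in output:

```shell
printf '200 /a\n404 /b\n200 /c\n200 /d\n404 /e\n' |
awk '{c[$1]++} END{for(k in c) print c[k],k}' | sort -rn
# → 3 200
#   2 404
```

The sort step matters: POSIX leaves for-in iteration order unspecified, so never rely on the raw END output being sorted.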
4. System administration tasks
# ── /etc/passwd processing ────────────────────────────────
awk -F':' '{print $1}' /etc/passwd # list all users
awk -F':' '$3>=1000{print $1,$6}' /etc/passwd # human users + home
awk -F':' '$7=="/bin/bash"' /etc/passwd # bash users only
# ── Process table ─────────────────────────────────────────
ps aux | awk 'NR>1 && $3>5.0 {print $1,$11,$3"%"}' # high CPU
ps aux | awk '{sum+=$3} END{printf "Total CPU: %.1f%%\n",sum}'
ps aux | awk 'NR>1{mem[$1]+=$4} END{for(u in mem) print u,mem[u]"%"}'
# ── Disk usage ────────────────────────────────────────────
df -h | awk 'NR>1{gsub(/%/,"",$5); if($5+0>80) print "ALERT",$5"%",$6}' # +0 forces numeric compare
# ── Network ───────────────────────────────────────────────
netstat -nt | awk '$6=="ESTABLISHED"{c[$5]++} END{for(k in c)print c[k],k}'
ss -nt | awk 'NR>1 && $1=="ESTAB"{split($5,a,":");c[a[1]]++}
END{for(k in c) print c[k],k}'
# ── Remove comment lines and blank lines ──────────────────
awk '!/^[[:space:]]*#/ && NF' # useful for config files
# ── Extract IPs ───────────────────────────────────────────
awk 'match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){print substr($0,RSTART,RLENGTH)}'
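The match() extractor prints only the first IPv4-looking token on each line, because match() reports just one hit via RSTART/RLENGTH. A sketch on two fabricated log lines:

```shell
# Second line contains two addresses; only the first is printed.
printf 'conn from 10.0.0.7 ok\nretry 192.168.1.5 then 10.0.0.9\n' |
awk 'match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/){print substr($0,RSTART,RLENGTH)}'
# → 10.0.0.7
#   192.168.1.5
```

To extract every match per line, loop: advance past each hit with `$0=substr($0,RSTART+RLENGTH)` inside a while over match().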
vriddh@prod-01:~$ awk -F: '$3>=1000{print $1,$6}' /etc/passwd
vriddh /home/vriddh
deploy /home/deploy
vriddh@prod-01:~$ df -h | awk 'NR>1{gsub(/%/,"",$5); if($5+0>70) print "WARN",$5"%",$6}'
WARN 78% /
WARN 91% /data
vriddh@prod-01:~$ awk '!seen[$0]++' dupes.txt | wc -l
842
✔ One-liner mastery —
!seen[$0]++ is the most elegant dedup in any language. END{print NR} beats wc -l when AWK is already processing the file. Use $0!=prev{print;prev=$0} for run-length style dedup without sorting. Always prefer AWK over multiple pipes when one AWK program can do the job — fewer processes, faster execution.
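Why `!seen[$0]++` prints each line exactly once: the post-increment returns the array entry's old value, so the expression evaluates to !0 (true, line printed) the first time a line appears and !1, !2, … (false) every time after. A minimal demonstration on made-up input:

```shell
printf 'a\nb\na\nc\nb\na\n' | awk '!seen[$0]++'
# → a
#   b
#   c
```

Unlike `sort -u`, this keeps the first occurrence of each line in its original position, at the cost of holding every distinct line in memory.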