Linux CLI · scripting · ~20 mins

Why text processing is Linux's superpower: Challenge Your Understanding

Challenge - 5 Problems
🎖️
Linux Text Processing Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
💻 Command Output
intermediate
Time limit: 2:00
What is the output of this command pipeline?
Consider the following Linux command pipeline that processes a text file named data.txt:

cat data.txt | grep -i 'error' | sort | uniq -c

What does this command output?
A. The total number of lines in data.txt containing 'error' (case-insensitive).
B. All lines from data.txt sorted alphabetically, ignoring case.
C. Only lines containing the word 'error' in uppercase, unsorted.
D. A list of unique lines containing 'error' (case-insensitive) with their counts, sorted alphabetically.
💡 Hint
Think about what each command in the pipeline does: grep filters lines, sort orders them, and uniq -c counts duplicates.
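To trace the pipeline step by step, here is a minimal sketch with a hypothetical data.txt (the sample contents below are invented for illustration):

```shell
# Create a small sample file (contents are hypothetical).
printf 'ERROR: disk full\nok: boot finished\nerror: disk full\nERROR: disk full\n' > data.txt

# grep -i keeps lines containing 'error' in any case,
# sort groups identical lines next to each other,
# uniq -c collapses adjacent duplicates and prefixes each with its count.
cat data.txt | grep -i 'error' | sort | uniq -c
# Prints '2 ERROR: disk full' and '1 error: disk full' (counts left-padded;
# the relative order of the two groups depends on the locale's collation).
```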
🧠 Conceptual
intermediate
Time limit: 1:30
Why is text processing considered Linux's superpower?
Which of the following best explains why text processing tools are considered a superpower in Linux?
A. Because Linux uses text files for configuration, logs, and communication, making text tools essential for automation and troubleshooting.
B. Because Linux only supports text files and cannot handle binary files.
C. Because Linux commands cannot process anything other than text data.
D. Because Linux requires users to manually edit all files without automation.
💡 Hint
Think about how Linux uses text files in daily system tasks.
🔧 Debug
advanced
Time limit: 2:00
Identify the error in this text processing command
The user wants to extract all lines containing the word 'fail' (case-insensitive) from log.txt and save the unique lines sorted by frequency in descending order. They run:

grep 'fail' log.txt | sort | uniq -c | sort -nr > result.txt

But the output file result.txt is empty. What is the most likely reason?
A. The uniq command requires input to be unsorted to count correctly.
B. The grep command is case-sensitive and misses lines with 'Fail' or 'FAIL'.
C. The sort -nr command sorts alphabetically, not numerically.
D. The output redirection operator '>' is incorrect and should be '>>'.
💡 Hint
Check if the grep command matches all case variations of 'fail'.
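A minimal reproduction of the failure, assuming a hypothetical log.txt whose matching lines all use uppercase variants:

```shell
# Hypothetical log: every relevant line says 'FAIL' or 'Fail', never 'fail'.
printf 'FAIL: unit 1\nFail: unit 2\nok: unit 3\n' > log.txt

# Case-sensitive grep matches nothing, so the rest of the pipeline
# receives no input and result.txt ends up empty.
grep 'fail' log.txt | sort | uniq -c | sort -nr > result.txt

# Adding -i matches every case variant and the file is populated.
grep -i 'fail' log.txt | sort | uniq -c | sort -nr > result.txt
```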
📝 Syntax
advanced
Time limit: 1:30
Which command correctly extracts the third column from a CSV file?
Given a CSV file data.csv with comma-separated values, which command correctly extracts the third column?
A. cut -d ';' -f 3 data.csv
B. cut -f 3 data.csv
C. cut -d ',' -f 3 data.csv
D. cut -c 3 data.csv
💡 Hint
Remember to specify the correct delimiter for CSV files.
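The delimiter's effect can be seen with a tiny invented data.csv:

```shell
# Hypothetical CSV contents.
printf 'name,age,city\nalice,30,paris\n' > data.csv

# cut splits on tabs by default; a line containing no tab is passed
# through whole, so this prints the unmodified lines, not a column:
cut -f 3 data.csv

# With the delimiter set to a comma, the third field is extracted:
cut -d ',' -f 3 data.csv
# Prints:
# city
# paris
```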
🚀 Application
expert
Time limit: 3:00
Create a one-liner to find the top 3 most common words in a text file
Which of the following Linux command pipelines correctly finds the top 3 most common words in file.txt, ignoring case and punctuation?
A. tr -cs '[:alpha:]' '\n' < file.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | head -3
B. cat file.txt | grep -o '\w+' | sort | uniq -c | sort -nr | head -3
C. sed 's/[^a-zA-Z]/ /g' file.txt | tr 'a-z' 'A-Z' | sort | uniq -c | sort -nr | head -3
D. awk '{for(i=1;i<=NF;i++) print $i}' file.txt | sort | uniq -c | sort -nr | head -3
💡 Hint
Consider how to split words, normalize case, and count frequencies.
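One way to realize the hint's three steps (split words, normalize case, count frequencies) is sketched below, using an invented file.txt chosen so that the word counts come out distinct:

```shell
# Hypothetical input with word frequencies the=3, cat=2, dog=1.
printf 'The the THE cat Cat dog\n' > file.txt

# tr -cs '[:alpha:]' '\n' replaces each run of non-letters with a single
# newline (one word per line), tr 'A-Z' 'a-z' lowercases, and
# sort | uniq -c | sort -nr | head -3 counts and keeps the top three.
tr -cs '[:alpha:]' '\n' < file.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -nr | head -3
# Prints (counts left-padded): 3 the, 2 cat, 1 dog
```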