Command Line Tricks For Data Scientists
Aspiring to master the command line should be on every developer’s list, especially data scientists. Learning the ins and outs of your terminal will undeniably make you more productive.
By Kade Killary, Data Scientist & Engineer
For many data scientists, data manipulation begins and ends with Pandas or the Tidyverse. In theory, there is nothing wrong with this notion. It is, after all, why these tools exist in the first place. Yet, these options can often be overkill for simple tasks like delimiter conversion.
Beyond making you more productive, the command line serves as a great history lesson in computing. For instance, awk, a data-driven scripting language, first appeared in 1977 with the help of Brian Kernighan, the K in the legendary K&R book. Today, nearly 50 years later, awk remains relevant, with new books still appearing every year! Thus, it’s safe to assume that an investment in command line wizardry won’t depreciate any time soon.
What We’ll Cover
- ICONV
- HEAD
- TR
- WC
- SPLIT
- SORT & UNIQ
- CUT
- PASTE
- JOIN
- GREP
- SED
- AWK
ICONV

File encodings can be tricky. For the most part, files these days are UTF-8 encoded. To understand some of the magic behind UTF-8, check out this excellent video. Nonetheless, there are times when we receive a file that isn’t in this format. This can lead to some wonky attempts at swapping the encoding schema. Here, `iconv` is a lifesaver: a simple program that will take text in one encoding and output the text in another.
- `iconv -l`: list all known encodings
- `iconv -c`: silently discard characters that cannot be converted
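A minimal sketch, assuming a hypothetical Latin-1 file named input.csv that we want re-encoded as UTF-8:

```bash
# Hypothetical file names; -f is the source encoding, -t the target
iconv -c -f ISO-8859-1 -t UTF-8 input.csv > input_utf8.csv
```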
HEAD

If you are a frequent Pandas user, then `head` will be familiar. Often, when dealing with new data, the first thing we want to do is get a sense of what exists. This leads to firing up Pandas, reading in the data and then calling `df.head()`: strenuous, to say the least. Head, without any flags, will print out the first 10 lines of a file. The true power of `head` lies in testing out cleaning operations. For instance, if we wanted to change the delimiter of a file from commas to pipes, one quick test would be: `head mydata.csv | sed 's/,/|/g'`.
- `head -n`: print a specific number of lines
- `head -c`: print a specific number of bytes
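For example, against the same hypothetical mydata.csv:

```bash
# Peek at the first 3 lines, then at the first 100 bytes
head -n 3 mydata.csv
head -c 100 mydata.csv
```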
TR

Tr is analogous to translate. This powerful utility is a workhorse for basic file cleaning. An ideal use case is swapping out the delimiters within a file, as in the sketch below.
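A minimal sketch, assuming a hypothetical comma-delimited data.csv:

```bash
# Translate commas to tabs; tr reads stdin and writes stdout
tr ',' '\t' < data.csv > data.tsv
```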
Another feature of `tr` is all the built-in `[:class:]` variables at your disposal. These include:

- `[:alnum:]`: all letters and digits
- `[:alpha:]`: all letters
- `[:blank:]`: all horizontal whitespace
- `[:cntrl:]`: all control characters
- `[:digit:]`: all digits
- `[:graph:]`: all printable characters, not including space
- `[:lower:]`: all lowercase letters
- `[:print:]`: all printable characters, including space
- `[:punct:]`: all punctuation characters
- `[:space:]`: all horizontal or vertical whitespace
- `[:upper:]`: all uppercase letters
You can chain a variety of these together to compose powerful programs. The following is a basic word count program you could use to check your READMEs for overused words.
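A hedged sketch of such a counter (relying on GNU tr padding the replacement set automatically):

```bash
# Break on punctuation/whitespace, lowercase everything, drop blank
# lines, then count and rank each unique word
cat README.md \
  | tr '[:punct:][:space:]' '\n' \
  | tr '[:upper:]' '[:lower:]' \
  | grep . \
  | sort \
  | uniq -c \
  | sort -nr
```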
Another example using basic regex:
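```bash
# '0-9' is a character range; delete every digit from a hypothetical data.txt
tr -d '0-9' < data.txt
```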
- `tr -d`: delete characters
- `tr -s`: squeeze characters
- `\NNN`: character with octal value NNN
WC

Word count. Its value is primarily derived from the `-l` flag, which will give you the line count. This tool comes in handy to confirm the output of various commands. So, if we were to convert the delimiters within a file and then run `wc -l`, we would expect the total line count to be the same. If not, then we know something went wrong.
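A sketch of that sanity check, reusing the hypothetical data.csv:

```bash
# Line counts before and after the delimiter swap should match
wc -l < data.csv
tr ',' '|' < data.csv | wc -l
```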
- `wc -c`: print the byte counts
- `wc -m`: print the character counts
- `wc -L`: print length of longest line
- `wc -w`: print word counts
SPLIT

File sizes can range dramatically, and depending on the job, it could be beneficial to split up the file: thus, `split`. The basic syntax is shown in the sketch below.
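Assuming a hypothetical filename.csv and a 500-line chunk size:

```bash
# Split filename.csv into 500-line pieces named new_filename_aa,
# new_filename_ab, and so on
split -l 500 filename.csv new_filename_
```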
Two quirks are the naming convention and the lack of file extensions. The suffix convention can be made numeric via the `-d` flag. To add file extensions, you’ll need to run a `find` command like the one below. It will change the names of ALL files within the current directory by appending `.csv`, so be careful.
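A sketch, relying on GNU find substituting `{}` even inside an argument:

```bash
# Append .csv to every regular file in the current directory
find . -type f -exec mv '{}' '{}'.csv \;
```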
- `split -b`: split by certain byte size
- `split -a`: generate suffixes of length N
- `split -x`: split using hex suffixes
SORT & UNIQ
The preceding commands are obvious: they do what they say they do. These two provide the most punch in tandem (i.e. unique word counts). This is due to `uniq`, which only operates on adjacent duplicate lines, hence the need to `sort` before piping the output through. One interesting note is that `sort -u` will achieve the same result as the typical `sort file.txt | uniq` pattern.
Sort does have a sneakily useful ability for data scientists: sorting an entire CSV based on a particular column. The `-t` option specifies the comma as our delimiter; more often than not, spaces or tabs are assumed. Furthermore, the `-k` flag specifies our key, i.e. the column to sort on.
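A sketch, assuming a hypothetical comma-delimited filename.csv sorted on its second column:

```bash
# -t sets the field delimiter; -k2 picks column two as the sort key
sort -t',' -k2 filename.csv
```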
- `sort -f`: ignore case
- `sort -r`: reverse sort order
- `sort -R`: scramble order
- `uniq -c`: count number of occurrences
- `uniq -d`: only print duplicate lines
CUT

Cut is for removing columns. To illustrate, suppose we only wanted the first and third columns.
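A sketch on the hypothetical comma-delimited filename.csv:

```bash
# -d sets the delimiter; -f picks the fields to keep
cut -d',' -f 1,3 filename.csv
```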
To select every column other than the first:
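```bash
# 2- means field two through the end of the line
cut -d',' -f 2- filename.csv
```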
In combination with other commands, `cut` serves as a filter. For instance, finding the number of unique values within the second column:
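```bash
# Isolate column two, then count its distinct values
cut -d',' -f 2 filename.csv | sort | uniq | wc -l
```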
PASTE

Paste is a niche command with an interesting function. If you have two files that you need merged, and they are already sorted, `paste` has you covered.
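A minimal sketch with two hypothetical files, names.txt and jobs.txt, each with one entry per line:

```bash
# Merge the files line by line, comma-delimited
paste -d ',' names.txt jobs.txt
```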
For a more SQL-esque variant, see below.
JOIN

Join is a simplistic, quasi-tangential SQL. The largest differences are that `join` will return all columns, and matches can only be on one field. By default, `join` will try to use the first column as the match key. For a different result, the following syntax is necessary:
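```bash
# Hypothetical files, each sorted on its join field; -t sets the
# delimiter, -1/-2 pick the key field in each file
join -t',' -1 2 -2 1 first_file.csv second_file.csv
```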
The standard join is an inner join. However, an outer join is also viable through the `-a` flag. Another noteworthy quirk is the `-e` flag, which can be used to substitute a value when a missing field is found. Not the most user-friendly command, but desperate times call for desperate measures.
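A hedged sketch of a left-outer-style join, using GNU join options on the same hypothetical files:

```bash
# -a 1 keeps unpairable lines from the first file; -e NULL fills the
# gaps, which requires an explicit -o output format list
join -t',' -a 1 -e 'NULL' -o '0,1.2,2.2' first_file.csv second_file.csv
```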
- `join -a`: print unpairable lines
- `join -e`: replace missing input fields
- `join -j`: equivalent to `-1 FIELD -2 FIELD`
GREP

Global search for a regular expression and print, or `grep`: likely the most well-known command, and with good reason. Grep has a lot of power, especially for finding your way around large codebases. Within the realm of data science, it acts as a refining mechanism for other commands, although its standard usage is valuable as well.
Count the total number of lines containing a word or pattern:
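```bash
# -c reports the number of matching lines in a hypothetical filename.csv
grep -c 'word' filename.csv
```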
Grep for multiple values using the or operator:
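```bash
# \| is GNU grep's basic-regex alternation; with -E you could write 'cat|dog'
grep 'cat\|dog' filename.csv
```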
- `alias grep="grep --color=auto"`: make grep colorful
- `grep -E`: use extended regexps
- `grep -w`: only match whole words
- `grep -l`: print name of files with match
- `grep -v`: inverted matching
THE BIG GUNS
Sed and awk are the two most powerful commands in this article. For brevity, I’m not going to go into exhaustive detail about either. Instead, I will cover a variety of commands that prove their impressive might. If you want to know more, there is a book just for that.
SED

At its core, `sed` is a stream editor that operates on a line-by-line basis. It excels at substitutions, but can also be leveraged for all-out refactoring. The most basic `sed` command consists of `s/old/new/g`. This translates to: search for the old value and replace all occurrences in-line with the new one. Without the `/g`, our command would terminate after the first occurrence on the line.
To get a quick taste of the power, let’s dive into an example. In this scenario, you’ve been given the following file:
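A hypothetical, pipe-delimited data.txt to work against:

```
balance|name
$1,000|john
$2,000|jack
```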
The first thing we may want to do is remove the dollar signs. The `-i` flag indicates in-place editing. The `''` indicates a zero-length backup extension, thus overwriting our initial file. Ideally, you would test each of these individually and then output to a new file.
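Applied to the hypothetical data.txt (BSD/macOS sed syntax; GNU sed takes `-i` with no argument):

```bash
# Strip the literal $ from every line
sed -i '' 's/\$//g' data.txt
```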
Next up: the commas in our balance column values.
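Because the hypothetical sample is pipe-delimited, stripping every comma only touches the balances:

```bash
# Drop the thousands separators
sed -i '' 's/,//g' data.txt
```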
Lastly, Jack up and decided to quit one day. So, au revoir, mon ami.
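A sketch that deletes his row entirely:

```bash
# /jack/d deletes every line matching the pattern
sed -i '' '/jack/d' data.txt
```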
As you can see, `sed` packs quite a punch, but the fun doesn’t stop there.
AWK

The best for last. Awk is much more than a simple command: it is a full-blown language. Of everything covered in this article, `awk` is by far the coolest. If you find yourself impressed, there are loads of great resources: see here, here and here.
Common use cases for `awk` include:
- Text processing
- Formatted text reports
- Performing arithmetic operations
- Performing string operations
Awk can parallel `grep` in its most nascent form:
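A sketch, assuming a hypothetical comma-delimited filename.csv that the remaining awk examples reuse:

```bash
# Print every line matching the pattern, just like grep
awk '/word/' filename.csv
```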
Or, with a little more magic, the sketch below prints the third and fourth columns, tab separated, for all lines with our word. `-F,` merely changes our delimiter to a comma.
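```bash
# Match lines containing "word", then print columns 3 and 4 with a tab between
awk -F, '/word/ { print $3 "\t" $4 }' filename.csv
```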
Awk comes with a lot of nifty built-in variables. For instance, `NF` (number of fields) and `NR` (number of the current record). To get the fifty-third record in a file:
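```bash
# NR is the current record (line) number
awk 'NR == 53' filename.csv
```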
An added wrinkle is the ability to filter based off of one or more values. The first example below will print the line number and columns for records where the first column equals "string":
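```bash
# Print the record number plus the whole line when column one matches
awk -F, '$1 == "string" { print NR, $0 }' filename.csv
```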
Multiple numerical expressions:
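```bash
# Hypothetical thresholds: keep lines where column 3 is at least 2005
# and column 5 is at most 1000
awk -F, '$3 >= 2005 && $5 <= 1000 { print NR, $0 }' filename.csv
```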
Sum the third column:
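```bash
# Accumulate column three, then print the total after the last line
awk -F, '{ x += $3 } END { print x }' filename.csv
```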
The sum of the third column, for values where the first column equals “something”:
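```bash
# Only accumulate when column one matches
awk -F, '$1 == "something" { x += $3 } END { print x }' filename.csv
```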
Get the dimensions of a file:
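```bash
# NR is the row count; NF, read on the last line, is the column count
awk -F, 'END { print NR, NF }' filename.csv
```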
Print lines appearing twice:
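```bash
# seen[$0] counts occurrences; print a line on its second sighting
awk '++seen[$0] == 2' filename.csv
```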
Remove duplicate lines:
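```bash
# Print a line only the first time it is seen
awk '!seen[$0]++' filename.csv
```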
Substitute multiple values using the built-in function `gsub()`:
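```bash
# Hypothetical values: replace any of the alternatives with "red", then print
awk '{ gsub(/scarlet|ruby|puce/, "red"); print }' filename.csv
```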
This handy `awk` command will combine multiple CSV files, skipping every header line after the first so the combined file keeps a single header.
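A sketch over all hypothetical .csv files in the current directory:

```bash
# FNR resets per file; skip line one of every file except the very first
awk 'FNR == 1 && NR != 1 { next } { print }' *.csv > combined.csv
```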
Need to downsize a massive file? Welp, `awk` can handle that with help from `sed`. Specifically, this command breaks one big file into multiple smaller ones based on a line count, and the one-liner below will also add an extension.
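A hedged sketch, chunking a hypothetical big_data.csv into 500-line pieces:

```bash
# sed '1d;$d' strips the first and last lines; awk then opens a new
# numbered .csv output file every 500 records
sed '1d;$d' big_data.csv \
  | awk 'NR % 500 == 1 { x = "data_" (++i) ".csv" } { print > x }'
```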
The command line boasts endless power. The commands covered in this article are enough to elevate you from zero to hero in no time. Beyond those covered, there are many utilities to consider for daily data operations. Csvkit, xsv and q are three of note. If you’re looking to take an even deeper dive into command line data science, then look no further than this book. It’s also available online for free!
Resources

- Sed One-Liners
- Awk One-Liners
- Working With CSVs on the Command Line
- Appending file extensions
- My Best AWK Tricks
- Bioinformatics one-liners
- The UNIX School
- Sed — An Intro and Tutorial
Bio: Kade Killary is a Data Scientist & Engineer at XMedia.