11 Most Practical Data Science Skills for 2022

While the field of data science continues to evolve with exciting new progress in analytical approaches and machine learning, there remains a core set of foundational skills for all general practitioners and specialists, especially those who want to be employable with full-stack capabilities.



Many “How to Data Science” courses and articles, including my own, tend to highlight fundamental skills like Statistics, Math, and Programming. Recently, however, I've noticed through my own experience that these fundamental skills can be hard to translate into the practical skills that actually make you employable.

Therefore, I wanted to create a unique list of practical skills that will get you hired.

The first four skills that I talk about are absolutely pivotal for any data scientist, regardless of what you specialize in. The remaining skills (5–11) are all important, but how much you use each will depend on your specialization.

For example, if you’re more statistically grounded, you might spend more time on inferential statistics. Conversely, if you’re more interested in text analytics, you might spend more time learning NLP, and if you’re interested in decision science, you might focus on explanatory modeling. You get the point.

With that said, let’s dive into what I believe are the 11 most practical data science skills:

 

1. Writing SQL Queries & Building Data Pipelines

 

Learning how to write robust SQL queries and schedule them on a workflow management platform like Airflow will make you extremely desirable as a data scientist, which is why it’s point #1.

Why? There are many reasons:

  1. Flexibility: companies like data scientists who can do more than just model data. Companies LOVE full-stack data scientists. If you’re able to step in and help build core data pipelines, you’ll be able to improve the insights that are gathered, build stronger reports, and ultimately make everyone’s lives easier.
  2. Independence: there will be instances where you need a table or view for a model or a data science project that does not exist. Being able to write robust pipelines for your projects instead of relying on data analysts or data engineers will save you time and make you more valuable.

Therefore, you MUST be an expert at SQL as a data scientist. There are no exceptions.
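As a small illustration of the kind of pipeline step this involves, here is a sketch using Python's built-in sqlite3 module. The `orders` table and its columns are made up for illustration; in a real pipeline, similar SQL would run against a warehouse on a schedule managed by a tool like Airflow.

```python
import sqlite3

# Hypothetical raw table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 9.99, "paid"), (1, 5.00, "refunded"), (2, 20.00, "paid")],
)

# A typical pipeline step: aggregate raw events into a per-user summary table
# that downstream models and reports can query directly.
conn.execute("""
    CREATE TABLE user_revenue AS
    SELECT user_id, SUM(amount) AS total_paid
    FROM orders
    WHERE status = 'paid'
    GROUP BY user_id
""")
print(conn.execute("SELECT * FROM user_revenue ORDER BY user_id").fetchall())
# [(1, 9.99), (2, 20.0)]
```

The same pattern scales up: the `CREATE TABLE ... AS SELECT` statement becomes a scheduled task, and the summary table becomes the input to your models and dashboards.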


2. Data Wrangling / Feature Engineering

 

Whether you’re building models, exploring new features to build, or performing deep dives, you’ll need to know how to wrangle data.

Data Wrangling means transforming your data from one format or structure into another, typically to make it easier to analyze.

Feature Engineering is a form of data wrangling but specifically refers to extracting features from raw data.

It doesn’t necessarily matter how you manipulate your data, whether you use Python or SQL, but you should be able to manipulate your data however you like (within the parameters of what is possible, of course).
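A minimal wrangling/feature-engineering sketch using only the standard library — the raw records and field names below are made up for illustration:

```python
from datetime import datetime

# Hypothetical raw records, e.g. as loaded from a CSV or an API.
raw = [
    {"user": "a", "signup": "2022-01-15 08:30:00"},
    {"user": "b", "signup": "2022-01-16 22:10:00"},
]

def engineer(record):
    # Wrangling: parse the raw timestamp string into a datetime object.
    ts = datetime.strptime(record["signup"], "%Y-%m-%d %H:%M:%S")
    # Feature engineering: extract model-ready features from the raw value.
    return {
        "user": record["user"],
        "signup_hour": ts.hour,
        "signup_weekday": ts.strftime("%A"),
        "is_evening": ts.hour >= 18,
    }

features = [engineer(r) for r in raw]
print(features[0]["signup_weekday"])  # Saturday
```

In practice you would do the same thing with pandas or SQL; the point is being able to reshape raw data into whatever form the task requires.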


3. Version Control / GitHub

 

When I say “version control,” I’m specifically referring to Git and GitHub. Git is the most widely used version control system in the world, and GitHub is essentially a cloud-based hosting service for Git repositories.

While Git is not the most intuitive skill to learn at first, it’s essential to know for almost every single coding-related role. Why?

  • It allows you to collaborate and work on projects in parallel with others
  • It keeps track of all versions of your code (in case you need to revert to older versions)

Take the time to learn Git. It will take you far!
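The core Git workflow can be sketched by driving the same commands you would type in a shell from Python via subprocess. This assumes git is installed, and the file name and commit message are illustrative:

```python
import os
import subprocess
import tempfile

# Work in a throwaway directory so nothing touches a real repository.
repo = tempfile.mkdtemp()

def git(*args):
    """Run a git command inside the demo repository and return its output."""
    return subprocess.run(
        ["git", *args], cwd=repo, check=True, capture_output=True, text=True
    ).stdout

git("init", "-q")
# Identity config is required before committing (placeholder values).
git("config", "user.email", "you@example.com")
git("config", "user.name", "Your Name")

# Create a file, stage it, and commit it -- the everyday Git loop.
with open(os.path.join(repo, "model.py"), "w") as f:
    f.write("print('baseline model')\n")
git("add", "model.py")
git("commit", "-q", "-m", "Add baseline model script")

print(git("log", "--oneline"))
```

Each commit is a recoverable snapshot, which is exactly what lets you revert to older versions and collaborate in parallel via branches.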

 

4. Storytelling (i.e., Communication)

 

It’s one thing to build a visually stunning dashboard or an intricate model with over 95% accuracy. BUT if you can’t communicate the value of your projects to others, you won’t get the recognition you deserve, and ultimately, you won’t be as successful in your career as you should be.

Storytelling refers to “how” you communicate your insights and models. Conceptually, if you were to think about a picture book, the insights/models are the pictures and the “storytelling” refers to the narrative that connects all of the pictures.

Storytelling and communication are severely undervalued skills in the tech world. From what I’ve seen in my career, this skill is what separates juniors from seniors and managers.

 

5. Regression/Classification

 

Building regression and classification models, i.e., predictive models, is not something that you’ll always be working on, but it’s something that employers will expect you to know if you’re a data scientist.

Even if it’s not something that you’ll often do, it’s something that you have to be good at because you want to be able to build high-performing models. To give some perspective, in my career so far, I’ve only productionalized TWO machine learning models, but they were mission-critical models that had a significant impact on the business.

Therefore, you should have a good understanding of data preparation techniques, boosted algorithms, hyperparameter tuning, and model evaluation metrics.
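The evaluation metrics mentioned above can be computed by hand, which is a good way to make sure you actually understand them. A minimal sketch with made-up labels and predictions for a binary classifier:

```python
# Hypothetical ground-truth labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion-matrix cells: true/false positives and negatives.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # of predicted positives, how many were right
recall = tp / (tp + fn)      # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)  # 0.75 0.75 0.75 0.75
```

In practice you would use scikit-learn's built-in metrics, but knowing what each one trades off is what lets you pick the right one for the business problem.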


6. Explainable AI / Explainable Machine Learning

 

Many machine learning algorithms were considered “black boxes” for a long time because it wasn’t clear how these models derived their predictions based on their respective inputs. That’s now changing due to the widespread adoption of explainable machine learning techniques, like SHAP and LIME.

SHAP and LIME are two techniques that tell you not only how important each feature is but also how each feature impacts the model’s output, similar to the coefficients in a linear regression equation.

With SHAP and LIME, you can create explanatory models and also better communicate the logic behind your predictive models too.
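SHAP and LIME themselves come from third-party libraries, but the underlying idea can be shown with a simpler model-agnostic stand-in: permutation importance, where you shuffle one feature and measure how much the model's error grows. The toy model and data below are made up for illustration:

```python
import random

random.seed(0)

def model(x):
    # Toy "trained model": depends only on the first feature, ignores x[1].
    return 2.0 * x[0]

X = [[i, random.random()] for i in range(50)]
y = [model(x) for x in X]

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(feature_idx):
    # Shuffle one feature's column and measure the increase in error.
    shuffled = [row[:] for row in X]
    col = [row[feature_idx] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature_idx] = v
    return mse(shuffled, y) - mse(X, y)

# Feature 0 drives the model, so shuffling it hurts; feature 1 does nothing.
print(permutation_importance(0) > permutation_importance(1))  # True
```

SHAP and LIME go further by attributing each individual prediction to each feature, but the spirit is the same: quantify how much each input actually matters to the output.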


7. A/B Testing (Experimentation)

 

A/B testing is a form of experimentation where you compare two different groups to see which performs better based on a given metric.

A/B testing is arguably the most practical and widely used statistical concept in the corporate world. Why? Because it allows you to compound hundreds or thousands of small improvements, resulting in significant gains over time.

If you’re interested in the statistical aspect of data science, A/B testing is essential to understand and learn.
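The statistical core of a simple A/B test is a two-proportion z-test, which can be written with only the standard library. The conversion counts below are made up for illustration:

```python
import math

# Hypothetical results: conversions and sample sizes per group.
conv_a, n_a = 200, 2000   # control:   10.0% conversion
conv_b, n_b = 248, 2000   # treatment: 12.4% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
# Pooled proportion under the null hypothesis (no difference).
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF (via the error function).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(round(z, 2), round(p_value, 4))
```

Here the p-value comes out below 0.05, so under the usual threshold the treatment's lift would be considered statistically significant. Real experimentation platforms add power analysis, sequential testing, and guardrail metrics on top of this core calculation.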


8. Clustering

 

Personally, I haven’t had to use clustering in my career, but it’s a core area of data science that everyone should at least be familiar with.

Clustering is useful for a number of reasons. You can find different customer segments, you can use clustering to label unlabeled data, and you can even use clustering to find cutoff points for models.
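To make the idea concrete, here is a from-scratch k-means sketch on toy one-dimensional data; the points are made up, and in real work you would reach for scikit-learn's KMeans instead:

```python
import random

random.seed(1)
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # two obvious clusters

def kmeans(points, k, iters=20):
    # Initialize centroids by sampling k distinct points.
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Step 1: assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Step 2: move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

print(kmeans(points, 2))  # roughly [1.0, 9.07]
```

The two returned centroids land in the middle of the two groups, which is exactly the "customer segments" idea: each centroid summarizes one segment.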


9. Recommendation Systems

 

While I haven’t had to build a recommendation system in my life (yet), it’s one of the most practical applications in data science. Recommendation systems are so powerful because they have the ability to propel revenue and profits. In fact, Amazon claimed to have boosted their sales by 29% due to their recommendation systems in 2019.

And so, if you ever work for a company whose users face a lot of options to choose from, recommendation systems might be a useful application to explore.
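A tiny item-based collaborative-filtering sketch shows the core idea: items rated similarly by the same users are "similar," so you recommend items similar to what a user already likes. The user-item ratings below are made up for illustration:

```python
import math

# Hypothetical ratings matrix: user -> {item: rating}.
ratings = {
    "u1": {"A": 5, "B": 4, "C": 1},
    "u2": {"A": 4, "B": 5},
    "u3": {"C": 5, "B": 1},
}

def item_vector(item):
    # Represent an item as its vector of ratings across all users (0 if unrated).
    return [ratings[u].get(item, 0) for u in sorted(ratings)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Items A and B are rated similarly by the same users; C is not.
sim_ab = cosine(item_vector("A"), item_vector("B"))
sim_ac = cosine(item_vector("A"), item_vector("C"))
print(sim_ab > sim_ac)  # True
```

Production systems use the same similarity idea at massive scale, typically via matrix factorization or learned embeddings rather than raw cosine over sparse vectors.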

 

10. NLP

 

NLP, or Natural Language Processing, is a branch of AI that focuses on text and speech. Unlike machine learning more broadly, I’d say that NLP is far from mature, which is what makes it so interesting.

NLP has a lot of use-cases…

  • It can be used for sentiment analysis to see how people feel about a business or its product(s).
  • It can be used to monitor a company’s social media by separating positive and negative comments.
  • It is the core technology behind chatbots and virtual assistants.
  • It is also used for text extraction (sifting through documents).

Overall, NLP is a really interesting and useful niche in the data science world.
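The sentiment-analysis use case above can be sketched with a naive lexicon approach using only the standard library; the word lists and reviews are made up, and real work would use a library like NLTK or spaCy, or a pretrained model:

```python
import re
from collections import Counter

# Tiny hypothetical sentiment lexicons.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"terrible", "hate", "broken"}

def sentiment(text):
    # Tokenize into lowercase words and count occurrences.
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score = positive hits minus negative hits.
    score = sum(words[w] for w in POSITIVE) - sum(words[w] for w in NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product, it's great!"))  # positive
print(sentiment("Terrible. It arrived broken."))      # negative
```

This ignores negation, sarcasm, and context, which is precisely the gap that modern NLP models exist to close.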


11. Metric Development

 

More recently, data scientists have adopted the responsibility of metric development because surfacing a metric depends on 1) data to calculate it and 2) code to calculate and output it.

Metric development involves several things:

  1. It involves picking the right metric that a team or department should use to help them monitor their goals.
  2. It involves clarifying and establishing any assumptions that need to be made for the metrics to hold.
  3. It involves developing the metric, coding it, and building a pipeline to monitor it on a periodic basis.
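Step 3 might look like this sketch: a conversion-rate metric computed from raw daily records. The records and the definition of "conversion" are illustrative assumptions; in practice this function would sit inside a scheduled pipeline:

```python
from datetime import date

# Hypothetical daily event rollups.
events = [
    {"day": date(2022, 1, 3), "visits": 1000, "signups": 50},
    {"day": date(2022, 1, 4), "visits": 1200, "signups": 72},
]

def conversion_rate(events):
    """Assumed metric definition: total signups divided by total visits."""
    visits = sum(e["visits"] for e in events)
    signups = sum(e["signups"] for e in events)
    return signups / visits if visits else 0.0

print(round(conversion_rate(events), 4))  # 0.0555
```

Note how the code encodes the metric's assumptions explicitly (what counts as a visit, how to handle zero traffic), which is exactly point 2 above.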

 

I hope this helps guide your learning and gives you some direction for the upcoming year. There is a lot to learn, so I would definitely choose a couple of skills that sound most interesting to you and go from there.

Do keep in mind that this is an opinionated article backed by anecdotal experience, so take from it what you find useful. As always, I wish you the best in your learning endeavors!

 
