Mining Twitter Data with Python Part 6: Sentiment Analysis Basics
Part 6 of this series builds on the previous installments by exploring the basics of sentiment analysis on Twitter data.
By Marco Bonzanini, Independent Data Science Consultant.
Sentiment Analysis is one of the interesting applications of text analytics. Although the term is often associated with sentiment classification of documents, broadly speaking it refers to the use of text analytics approaches applied to the set of problems related to identifying and extracting subjective material in text sources.
This article continues the series on mining Twitter data with Python, describing a simple approach for Sentiment Analysis and applying it to the rugby data set (see Part 4).
A Simple Approach for Sentiment Analysis
The technique we’re discussing in this post has been elaborated from the traditional approach proposed by Peter Turney in his paper Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. A lot of work has been done in Sentiment Analysis since then, but the approach still has interesting educational value. In particular, it is intuitive, simple to understand and to test, and most of all unsupervised, so it doesn’t require any labelled data for training.
Firstly, we define the Semantic Orientation (SO) of a word as the difference between its associations with positive and negative words. In practice, we want to calculate “how close” a word is to terms like good and bad. The chosen measure of “closeness” is Pointwise Mutual Information (PMI), calculated as follows (t1 and t2 are terms):

PMI(t1, t2) = log( P(t1, t2) / (P(t1) * P(t2)) )
In Turney’s paper, the SO of a word was calculated against excellent and poor, but of course we can extend the vocabulary of positive and negative terms. Using V+ for a vocabulary of positive terms and V- for the negative ones, the Semantic Orientation of a term t is hence defined as:

SO(t) = sum of PMI(t, t') for t' in V+  -  sum of PMI(t, t') for t' in V-
We can build our own list of positive and negative terms, or we can use one of the many resources available on-line, for example the opinion lexicon by Bing Liu.
Computing Term Probabilities
In order to compute P(t) (the probability of observing the term t) and P(t1, t2) (the probability of observing the terms t1 and t2 occurring together), we can re-use some previous code to calculate term frequencies and term co-occurrences. Given the set of documents (tweets) D, we define the Document Frequency (DF) of a term as the number of documents where the term occurs. The same definition can be applied to co-occurrent terms. Hence, we can define our probabilities as:

P(t) = DF(t) / |D|
P(t1, t2) = DF(t1, t2) / |D|
In the previous articles, the document frequency for single terms was stored in the dictionaries count_single and count_stop_single (the latter doesn’t store stop-words), while the document frequency for the co-occurrences was stored in the co-occurrence matrix com.
This is how we can compute the probabilities:
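A sketch of this computation, assuming the count_stop_single dictionary and the co-occurrence matrix com built in the previous parts, with n_docs as the total number of tweets (the small data structures below are made-up stand-ins for illustration):

```python
from collections import defaultdict

# Illustrative stand-ins for the structures built in Parts 3 and 4:
# count_stop_single maps a term to its document frequency (stop-words removed),
# com is the co-occurrence matrix, n_docs is the total number of tweets.
n_docs = 4
count_stop_single = {'good': 2, 'match': 3, 'bad': 1}
com = {'good': {'match': 2}, 'match': {'good': 2, 'bad': 1}, 'bad': {'match': 1}}

# p_t[t] approximates P(t); p_t_com[t1][t2] approximates P(t1, t2)
p_t = {}
p_t_com = defaultdict(lambda: defaultdict(int))

for term, n in count_stop_single.items():
    p_t[term] = n / n_docs
    for t2 in com.get(term, {}):
        p_t_com[term][t2] = com[term][t2] / n_docs
```

With real data, the dictionaries above are simply the structures produced by the earlier tutorials; only the division by n_docs is new.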
Computing the Semantic Orientation
Given two vocabularies for positive and negative terms:
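For example, two small hand-picked lists could look like this (a minimal sample; a resource like Bing Liu’s opinion lexicon provides far more terms):

```python
# Sample vocabularies of opinion words; extend these, or load
# a full lexicon such as Bing Liu's, for better coverage.
positive_vocab = ['good', 'nice', 'great', 'awesome', 'outstanding',
                  'fantastic', 'terrific', 'like', 'love', ':)', ':-)']
negative_vocab = ['bad', 'terrible', 'crap', 'useless', 'hate', ':(', ':-(']
```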
We can compute the PMI of each pair of terms, and then compute the Semantic Orientation as described above:
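A minimal sketch of this step, reusing the probability dictionaries from the previous section (the sample values and tiny vocabularies below are illustrative stand-ins):

```python
import math
from collections import defaultdict

# Term and co-occurrence probabilities as computed earlier (sample values).
p_t = {'good': 0.5, 'match': 0.75, 'bad': 0.25}
p_t_com = {'good': {'match': 0.5},
           'match': {'good': 0.5, 'bad': 0.25},
           'bad': {'match': 0.25}}
positive_vocab = ['good']
negative_vocab = ['bad']

# PMI(t1, t2) = log2( P(t1, t2) / (P(t1) * P(t2)) )
pmi = defaultdict(lambda: defaultdict(int))
for t1 in p_t:
    for t2 in p_t_com.get(t1, {}):
        pmi[t1][t2] = math.log2(p_t_com[t1][t2] / (p_t[t1] * p_t[t2]))

# SO(t): association with positive terms minus association with negative terms
semantic_orientation = {}
for term in p_t:
    positive_assoc = sum(pmi[term][tx] for tx in positive_vocab)
    negative_assoc = sum(pmi[term][tx] for tx in negative_vocab)
    semantic_orientation[term] = positive_assoc - negative_assoc
```

Note that pmi is a defaultdict, so pairs that never co-occur contribute 0 to the sums rather than raising a KeyError.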
The Semantic Orientation of a term will have a positive (negative) value if the term is often associated with terms in the positive (negative) vocabulary. The value will be zero for neutral terms, e.g. when the PMIs with positive and negative terms balance out or, more likely, when a term is never observed together with terms from either vocabulary.
We can print out the semantic orientation for some terms:
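For instance, sorting the terms by their score lets us inspect the most positive and most negative ones (the rugby-flavoured terms and scores below are made-up values for illustration, not actual results):

```python
import operator

# Semantic orientation scores as computed above (illustrative values only).
semantic_orientation = {'#ita': 2.9, 'win': 1.8, 'try': 0.7,
                        '#eng': -0.4, 'loss': -1.2}

semantic_sorted = sorted(semantic_orientation.items(),
                         key=operator.itemgetter(1), reverse=True)
top_pos = semantic_sorted[:3]   # highest-scoring (most positive) terms
top_neg = semantic_sorted[-3:]  # lowest-scoring (most negative) terms

print(top_pos)
print(top_neg)
```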
The PMI-based approach has been introduced as simple and intuitive, but of course it has some limitations. The semantic scores are calculated on terms, meaning that there is no notion of “entity” or “concept” or “event”. For example, it would be nice to aggregate and normalise all the references to the team names, e.g. #ita, Italy and Italia should all contribute to the semantic orientation of the same entity. Moreover, do the opinions on the individual teams also contribute to the overall opinion on a match?
Some aspects of natural language are also not captured by this approach, most notably modifiers and negation: how do we deal with phrases like not bad (this is the opposite of just bad) or very good (this is stronger than just good)?
This article has continued the tutorial on mining Twitter data with Python introducing a simple approach for Sentiment Analysis, based on the computation of a semantic orientation score which tells us whether a term is more closely related to a positive or negative vocabulary. The intuition behind this approach is fairly simple, and it can be implemented using Pointwise Mutual Information as a measure of association. The approach has of course some limitations, but it’s a good starting point to get familiar with Sentiment Analysis.
Bio: Marco Bonzanini is a Data Scientist based in London, UK. Active in the PyData community, he enjoys working in text analytics and data mining applications. He's the author of "Mastering Social Media Mining with Python" (Packt Publishing, July 2016).
Original. Reposted with permission.
- Mining Twitter Data with Python Part 3: Term Frequencies
- Mining Twitter Data with Python Part 4: Rugby and Term Co-occurrences
- Mining Twitter Data with Python Part 5: Data Visualisation Basics