
Sentiment Analysis Symposium Highlights

Highlights from the Sentiment Analysis Symposium, held recently in New York: affective computing (sentiment from facial expressions), the need to market to a “tribe” of people, the differing speed at which items appear across social media, IBM’s notion of an Engaged Employee, and more.

By Steve Gallant, March 10, 2014.

Here are some personal impressions of the Sentiment Analysis Symposium (New York, March 6, 2014), a well-attended conference organized by Seth Grimes (Alta Plana). For a complete list of speakers, slides, videos, and other material, please visit http://sentimentsymposium.com/.

Sentiment Symposium 2014

Prof. Rosalind Picard (MIT Media Lab) is working on “affective computing,” detecting sentiment from facial expressions. An interesting application is helping autistic kids sense sentiment in others’ facial expressions, as well as providing feedback on their own expressions. Her group has developed software to measure smile intensity, and has a database of a billion(!) smile-classified facial views, collected internationally. She commented that for best performance it is important to look at dynamic facial images, not just static ones. Their software can differentiate “delight smiles” from “frustration smiles.” She also noted that measuring skin conductance on the wrist is a better predictor of memory consolidation during sleep than EEG signals. Prof. Picard helped found Affectiva, whose technology is used by 300 brands to measure sentiment.

David Rabjohns (MotiveQuest) gave a marketing perspective, emphasizing the need to market to a “tribe” of people (possibly identified through sentiment on social media). For example, Toyota Prius customers were found to care about the environment, rather than the car itself or saving money. Toyota emphasized this and engaged with environmental leaders, flying them around for consulting and conferences. The Prius outperformed Honda’s hybrid, which didn’t take this approach. Successful brands transcend the product; for example, Nike celebrates people, health, and athletics. MotiveQuest maps out motivational areas using 12 scales: “feel accomplished,” “feel savvy,” “feel sensual,” etc. Other examples include the campaigns for Ram trucks (“feel important, rebellious”) and Greek yogurt (“feel accomplished, pure, nurturing”). In summary, think “what motivates the tribe,” not “how can I sell things.”

Dr. Scott Hendrickson’s company, GNIP, provides real-time data from multiple social data sources. He pointed out that different social media have a different speed of appearance for items, as well as a different depth. For example, items appear first on Twitter, but they are quite shallow; YouTube is at the other extreme.

Marie Wallace described IBM’s notion of an Engaged Employee.  IBM focuses primarily on metadata, not text, to derive an “Enterprise Graph”, which becomes useful for recommending the sales team for an engagement, for employee retention analysis, and for constructing an individual engagement dashboard.

Prof. V.S. Subrahmanian (Univ. of Maryland) described using 31 sentiment and other entities, together with diffusion models, to predict the upcoming election in India. His predictions: Modi will become Prime Minister, and the BJP will fall short of a majority but will lead a coalition government. These predictions are consistent with recent polls in India.

Sarah Biller (Capital Market Exchange) computes sentiment for institutional investors. They employ Bayesian models to derive a sentiment-adjusted price for a stock or bond, analyzing 140 bonds per month. Key influencers include portfolio managers, chief strategists, and similar roles, and the inputs come from a curated selection of the business press.

Maggie Xiong described how The Huffington Post uses machine-learning models (support vector machines, Bayesian classifiers, boosting, logistic regression) to decide which comments to exclude because they don’t meet its standards. The resulting system handles about two-thirds of the comments automatically, leaving one-third for human decisions.
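The talk didn’t cover implementation details, but the general recipe is familiar. Below is a minimal sketch, assuming a simple multinomial naive Bayes classifier (one of the model families mentioned) built from scratch on hypothetical training comments; the thresholds and the two-thirds/one-third split logic are illustrative, not The Huffington Post’s actual system.

```python
import math
from collections import Counter

# Hypothetical training comments (label 1 = violates standards).
docs = [
    "great article thanks",
    "nice analysis thanks",
    "spam spam buy now",
    "idiot spam buy",
]
labels = [0, 0, 1, 1]

def train(docs, labels):
    """Fit per-class word counts for a multinomial naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for doc, y in zip(docs, labels):
        counts[y].update(doc.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

counts, priors, vocab = train(docs, labels)

def score(doc):
    """Smoothed log-odds that a comment violates standards (class 1)."""
    total = sum(priors.values())
    logodds = math.log(priors[1] / total) - math.log(priors[0] / total)
    n1, n0 = sum(counts[1].values()), sum(counts[0].values())
    for w in doc.lower().split():
        if w not in vocab:
            continue  # ignore words never seen in training
        p1 = (counts[1][w] + 1) / (n1 + len(vocab))  # Laplace smoothing
        p0 = (counts[0][w] + 1) / (n0 + len(vocab))
        logodds += math.log(p1) - math.log(p0)
    return logodds

def triage(doc, band=1.0):
    """Auto-approve, auto-reject, or defer comments near the boundary."""
    s = score(doc)
    if s < -band:
        return "approve"
    if s > band:
        return "reject"
    return "human review"
```

The uncertainty band is what produces the human/automatic split: confident predictions are handled automatically, while comments near the decision boundary are deferred to moderators.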

Shree Dandekar described Dell’s successful uses of sentiment. They monitor 25k–30k conversations per day, looking at sentiment, gravity (depth of sentiment), domain influence (e.g., the WSJ), author credibility, and relevance. As a case study he described the launch of their XPS 13 computer, which saw a dip in sales several months after launching. However, sentiment toward the product was good, so they avoided slashing the price. Sure enough, sales bounced back, and in retrospect the dip was attributed to seasonality. They also use sentiment analysis when evaluating companies for acquisition.

Prof. Stephen Pulman (Oxford) gave an instructive overview of recent cutting-edge research on “deep learning” with neural networks.

Aloke Guha (Cruxly) noted that a bag-of-words model (i.e., which words occur, but not their order) is insufficient for sentiment. Grammar must also be taken into account, although that is difficult. Additionally, verb classifications are needed to determine intent, which they capture with a couple hundred grammar rules. They track sentiment for “buy,” “like,” and five other intents, and use this sentiment to create lead lists for sales.
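The limitation is easy to demonstrate. In the sketch below (a toy illustration, not Cruxly’s rules), two hypothetical sentences with opposite purchase intent produce identical bag-of-words representations, while a single word-order rule can tell them apart:

```python
from collections import Counter

def bag_of_words(text):
    """Word counts only: which words occur, not the order they occur in."""
    return Counter(text.lower().replace(",", "").split())

# Opposite purchase intent, identical bags of words:
pos = "I want to buy this phone, not return it"
neg = "I want to return this phone, not buy it"

def intent(text, verb="buy"):
    """Toy word-order rule: a negation right before the verb flips intent."""
    tokens = text.lower().replace(",", "").split()
    for i, tok in enumerate(tokens):
        if tok == verb:
            if i > 0 and tokens[i - 1] in {"not", "never"}:
                return "negative"
            return "positive"
    return "unknown"
```

A real system needs far more than adjacent-word negation (scope, verb classes, syntax), which is presumably what Cruxly’s couple hundred grammar rules address.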

Steve Gallant (sgallant@mmres.com) is VP for Research at MultiModel Research, a company involved with text and machine learning for sentiment analysis, predictions, and advanced (parse-aware) search.