
O’Reilly NYC AI Conference Highlights: Explainable AI, Vector Representation, Bias, and Future


The answer to questions of trust and bias in AI is largely seen in the focus on Explainable AI. Although traditionally viewed as "black boxes", AI and machine learning systems are not ontologically inscrutable.



By Joe Duncan, Tweepsmap.

INTRO

I was lucky enough to win a free pass to the O'Reilly AI conference in NYC (June 26-29, 2017), courtesy of KDnuggets. I've had a life-long passion for AI, so I was quite excited to go see the best and the brightest in the industry and hear what they have to say! Working at Tweepsmap, I'm also interested in what new developments in big data and natural language processing (NLP) could do to improve our analytics.

The conference was held at a downtown Hilton in NYC, but it wasn't large as conferences go - only a couple of thousand people. It would have been a great venue if it weren't for the $14.83, 16oz pints at the hotel bar! I only had one, but the free beer nuts were a nice touch. The venue was well chosen: it was not too crowded or too spread out (I've been to conferences where there was a 15-minute walk between talks!), and the staff were very friendly and helpful. There was, however, a distinct lack of seating outside the lecture halls, resulting in the hallways being filled with people sitting on the floor between talks. I'm not sure if this was a NYC thing or an O'Reilly AI thing - I noticed a distinct absence of ANY kind of seating in most public spaces in NYC - but it was definitely an annoying thing. How about some chairs?

The Quebec City venue for AAAI 2014 managed to provide ample seating outside of the lecture halls so that no one had to sit on the floor between talks; O'Reilly AI can do better. One thing about the conference organization that I was quite impressed with was the availability of power outlets. Power cords had been threaded under the chairs in the lecture halls, and there was an abundant supply of multi-outlet power bars within easy reach no matter where you sat. It was the first conference I have been to where I didn't have to fight for a spot near the lone power outlet at the back of the hall.

OVERVIEW

  • Industry conference: more about practical solutions than academic esoterica
  • Big sponsors: IBM, Intel, Nvidia, Google, and O'Reilly (of course)

The main overarching themes of the conference were deep learning and big data - and how they can improve our lives, from medical diagnosis and self-driving cars to tackling fake news. Deep learning and big data were mentioned in just about every talk I saw. Beyond those, there were recurring subthemes of Explainable AI, vector representations, and big data bias & GIGO, along with a general lament about the lay public's understanding of what AI is: it is too frequently confused with Artificial General Intelligence (AGI), forcing researchers to describe their work as machine learning in order to be properly understood.

There were plenty of talks at the conference, and since I couldn't see them all, I tried to focus on the ones having to do with text mining and natural language processing (NLP), as those were most relevant to my current work. There were a couple of standouts.

Explainable AI


Explainable AI is a hot new topic in the field, with the goal of having AI not only provide answers and solve problems, but also provide a model of the reasoning used. Most machine learning systems (e.g. artificial neural networks, support vector machines, etc.) have traditionally been viewed as a "black box": training and input go in one end and answers/solutions come out the other. Researchers weren't really interested in HOW the black boxes got their answers, only that they could. However, now that AI and machine learning systems are becoming more mainstream, people are (rightly) concerned about whether or not these systems can be trusted to make important decisions. Explainable AI, by providing a model of the explanation in tandem with the desired output, is seen as a way to address this issue - if we know how the system is making its decisions, we can be more confident in the output. Not surprisingly, Explainable AI came up a lot at the conference.

Beyond the state of the art in reading comprehension


In an excellent talk called "Beyond the state of the art in reading comprehension", Jennifer Chu-Carroll from Elemental Cognition (who previously helped develop IBM's Watson) quoted Einstein:
"If you can't explain it simply, you don't understand it well enough"


and discussed how to improve natural language understanding by extending the "Question & Answer" paradigm currently used as a metric: the questions should require correlating the text with information outside of it, and the answers should include a model of the reasoning used to reach them.

However, most of the common datasets and tests (such as MCTest and SQuAD) used to train and compare NLP systems for understanding have questions where the answers are no more than paraphrased words from the story in the correct combination: mere syntactic manipulation. Chu-Carroll termed this set of problems "AI-easy" and contrasted them with "AI-hard" NLP understanding problems, where the answers involve extrapolating from pre-existing knowledge outside of the story. According to her, a better way to test such capacity is to require that the system produce not only the answer to the question, but also an explanation of the reasoning behind it. Some systems turn out to have surprisingly off-kilter reasoning even for their correct answers!

"AI-easy" is something like:

"Bob is driving a car. Bob drops Alice off at school."
Q: "Where did Bob drive to?"
A: "School"

This is simply a rearrangement of symbols present in the text.

"AI-hard" is something like:

"Alice is driving a boat. Bob is driving a car."
Q: "Who needs a life jacket?"
A: "Alice"

This requires access to external knowledge: that boats run on water, and that being near water requires a life jacket.
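
To make the contrast concrete, here is a minimal, hypothetical sketch (my own toy illustration, not Elemental Cognition's approach). The "AI-easy" question can be answered by pattern matching over the story text alone; the "AI-hard" one only works if the system can also consult facts the story never states:

```python
import re

EXTERNAL_KNOWLEDGE = {        # facts the story never states
    "boat": "water",          # boats travel on water
    "water": "life jacket",   # being on water calls for a life jacket
}

def ai_easy(story):
    """'Where did Bob drive to?' -- just rearrange words already in the story."""
    match = re.search(r"drops \w+ off at (\w+)", story)
    return match.group(1) if match else None

def ai_hard(story):
    """'Who needs a life jacket?' -- requires the external facts above."""
    for person, vehicle in re.findall(r"(\w+) is driving a (\w+)", story):
        medium = EXTERNAL_KNOWLEDGE.get(vehicle)                  # boat -> water
        if medium and EXTERNAL_KNOWLEDGE.get(medium) == "life jacket":
            return person
    return None

print(ai_easy("Bob is driving a car. Bob drops Alice off at school"))  # school
print(ai_hard("Alice is driving a boat. Bob is driving a car"))        # Alice
```

The point is not the regexes, of course, but that the second function cannot be written at all without the little knowledge base at the top.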

Elemental Cognition is working on solving this problem by combining statistical knowledge and symbolic vector representations in a system with access to large datasets it can query in a "dialogue" with itself, generating questions and answers to fill the gaps in its knowledge.


Teaching Machines to Reason and Comprehend


Russ Salakhutdinov from Carnegie Mellon gave a talk called "Teaching Machines to Reason and Comprehend" where he described how the multi-layer hierarchical feature representations used in deep learning ANNs allow us to actually inspect the "visual attention" of the model by inspecting the feature activations at different layers. Since we know what the features are, we can decode them and present them - indicating visually which pixels the system is weighting the most, what it is "paying attention to". Nvidia has recently done something very similar in order to inspect the processing of its self-driving car software, which you can see here (https://www.technologyreview.com/s/604324/nvidia-lets-you-peer-inside-the-black-box-of-its-self-driving-ai/). They were showing this video (or a similar one) at their conference booth.
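
As a rough illustration of that general idea (my own toy PyTorch example, not Salakhutdinov's or Nvidia's code), one simple way to see which pixels a network is "paying attention to" is to take the gradient of its top prediction with respect to the input image:

```python
import torch
import torch.nn as nn

# Stand-in model; in practice this would be a trained deep network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)    # dummy input image

scores = model(image)
top_class = scores.argmax(dim=1)
scores[0, top_class].backward()          # gradient of the top score w.r.t. pixels

# Per-pixel importance: gradient magnitude, max over colour channels.
saliency = image.grad.abs().max(dim=1).values            # shape: (1, 64, 64)
print(saliency.shape, float(saliency.max()))
```

With a trained network and a real photo, the saliency map would be overlaid on the image to show visually which regions drove the prediction.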

Programming your way to explainable AI


Mark Hammond from Bonsai opened his talk "Programming your way to explainable AI" with this quote:


"No one really knows how the most advanced algorithms do what they do. That could be a problem." Will Knight - The Dark Secret at the Heart of AI


... before immediately stating it was untrue - most researchers have a pretty good understanding of how their systems work, but making it accessible had never been a priority. He essentially agreed with Chu-Carroll that expert systems and other AI models must be able to provide not only an answer or solution, but also an explanation in order for us to have full confidence in their performance. He discussed the open questions of Explainable AI: "What is the appropriate level of abstraction?" (for the explanation) and "How do we get there?".

In his view the level of abstraction required is that of "justification" - providing an acceptable reasoning model for the system's predictions.

He contrasted this with "introspection" - exposure of the system's internal representations and examining how they are applied. Hammond's proposal for how to get there was straightforward: decompose the problem into subproblems, dividing the input space into intelligible subsets of classifiable inputs. He provided an example of an ANN system learning to play a variation of the game "Lunar Lander", where the goal is to land a moon lander (in 2D) in a crater, starting from a random point at the top of the screen, using three thrusters and Newtonian physics.

The "non-Explainable" version simply represents the whole system as a single ANN, and is quite capable of learning to play the game, but doesn't give us any idea how it has done so. The "Explainable-AI" version used a separate ANN for each individual thruster, each with their own training independent of the others. By breaking the system up into separate sub-behaviours, the behaviour of the whole was more interpretable.

Vector Representations


Another theme that came up, at a lower level of abstraction, was that of vector representations: rather than representing data, inputs, and outputs as abstract symbols and the relationships between them, these systems represent them as vectors - essentially arrays of numeric values.

Adding meaning to NLP


Jonathan Mugan from DeepGrammar (deepgrammar.com/) gave a talk called "Adding meaning to NLP" where he discussed the role of vector representations in improving the performance of natural language processing (NLP) systems. He contrasted symbolic systems like latent Dirichlet allocation (LDA), which represent meaning as statistical relations between symbols (a "meaningless bag-of-words"), with sub-symbolic systems using vector representations, such as Word2vec and Seq2Seq (both by Google). The latter allow word vectors to be meaningfully combined mathematically into vectors representing whole phrases and word sequences - vectors which can then be "unwound" in a similar fashion to retrieve previous words in a sequence, or sub-meanings of a phrase.
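
A minimal sketch of that mechanism (assuming gensim 4.x; the toy corpus is far too small to yield meaningful neighbours, it just shows the plumbing): words become dense numeric vectors, and those vectors can be combined arithmetically into a crude phrase representation.

```python
from gensim.models import Word2Vec

corpus = [
    ["the", "cat", "sat", "in", "the", "box"],
    ["the", "dog", "slept", "on", "the", "mat"],
    ["a", "cat", "played", "with", "a", "box"],
]
model = Word2Vec(sentences=corpus, vector_size=25, window=2, min_count=1, epochs=50)

# Each word is now an array of numbers rather than an opaque symbol.
cat_vec = model.wv["cat"]
box_vec = model.wv["box"]

# A (very crude) phrase vector: combine the word vectors arithmetically.
phrase_vec = (cat_vec + box_vec) / 2
print(model.wv.similar_by_vector(phrase_vec, topn=3))
```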

Chu-Carroll mentioned in her talk that vector representations were the new approach to the lower level implementation of NLP systems, and that they were making gains in performance as a result.

AI in Enterprise Software


In the SAP talk "AI in Enterprise Software", Eric Marcade presented a long list of AI-hype buzzwords as features of SAP's AI-as-a-service platform - including "vector representations" as one of the features offered in their "one-size-fits-all" AI services for unstructured data (any mention of "deep learning", however, was conspicuously absent).

Conversational AI


In a fascinating talk, "Conversational AI", Yishay Carmiel of Spoken Communications (www.spoken.com/) discussed how their systems perform voice recognition on massive amounts of archived and live customer service calls. By using parallel deep learning algorithms and "i-vectors", they were able to achieve what he called "additive performance" and drastically improve the performance of their systems. "I-vector" frameworks take utterances (verbal audio snippets) coded in what is called Total Factor Space (https://dsp.stackexchange.com/questions/38689/what-do-we-mean-by-total-factor-space-in-audio-processing) and represent them as low-dimensional vectors.
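
Proper i-vector extraction applies factor analysis to Gaussian mixture model statistics in the total variability space; as a loose stand-in for the core idea - turning each variable-length utterance into a single fixed, low-dimensional vector - here is a toy PCA sketch over made-up utterance features (my own illustration, not Spoken's pipeline):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Pretend each utterance has already been summarized as a 2,048-dim feature
# vector (e.g. pooled acoustic statistics); 500 utterances in total.
utterance_features = rng.normal(size=(500, 2048))

# Project every utterance down to a compact 100-dim representation.
# (Real i-vectors use factor analysis over GMM supervectors, not PCA.)
ivector_like = PCA(n_components=100).fit_transform(utterance_features)
print(ivector_like.shape)   # (500, 100): one low-dimensional vector per utterance
```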

Salakhutdinov, in his talk, discussed how, by using vector representations for meaning in visual classification systems, it's possible to add or subtract meaning to generate a new meaning. For example, if you have a vector representation for "cat in a box" in a visual search algorithm that retrieves pictures of cats in boxes, you could have the same system retrieve just pictures of cats by subtracting the vector representation of "box" from the vector representation of "cat in a box".
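
A toy illustration of that arithmetic with made-up 3-D "embeddings" (not Salakhutdinov's model): subtracting the "box" vector from the "cat in a box" vector lands nearest the plain "cat" vector.

```python
import numpy as np

embeddings = {                      # hypothetical learned vectors
    "cat":          np.array([0.9, 0.1, 0.0]),
    "box":          np.array([0.0, 0.8, 0.1]),
    "cat in a box": np.array([0.9, 0.9, 0.1]),   # roughly cat + box
    "dog":          np.array([0.1, 0.1, 0.9]),
}

query = embeddings["cat in a box"] - embeddings["box"]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The nearest concept to the subtracted vector drives what gets retrieved.
best = max(embeddings, key=lambda name: cosine(query, embeddings[name]))
print(best)   # cat -- so the search now returns pictures of cats, not boxes
```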

GIGO & Bias


The last major sub-theme I noticed had to do with "garbage-in, garbage-out" (GIGO) and how it can result in AI systems making biased decisions or judgments. The basic idea is that by training our AI systems on data sets which are either biased or were gathered in a biased manner, the eventual outputs of that system will reflect the bias present in the data.

Planning for the social impact of AI


Madeleine Elish from Data & Society (https://datasociety.net/) directly addressed this issue in her talk "Planning for the social impact of AI". She emphasized how AI is only as good as its data, and how "big data" is the new AI: many of the recent improvements in machine learning have come simply from being able to crunch huge amounts of data. Reliance on big data means that such systems become vulnerable to any bias present in the data - and because the data sets are so large, it can be difficult to detect biases just by inspecting the data prior to training. Elish gave an example of how a system designed to assess criminality from personal information could incorrectly begin to associate African-American names with higher levels of criminality if the data it was trained on came from a region where African-American people were disproportionately targeted for arrest in the first place. She claimed the problem is compounded by people's unrealistic expectations: popular misconceptions about AI lead the general public to believe that AI systems are more capable, more objective, and less fallible than they actually are.
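
A simple sketch of the kind of sanity check this implies (hypothetical column names and toy data, not anyone's production pipeline): before training on a dataset, compare label rates across groups, since a large gap may reflect biased collection rather than any real difference.

```python
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 1, 0, 0, 0, 1, 0, 0],   # e.g. "was arrested"
})

# Label rate per group: a big disparity here will be learned by any model
# trained on this data -- garbage in, garbage out -- unless the skew in how
# the data was gathered is investigated and accounted for.
rates = df.groupby("group")["label"].mean()
print(rates)
```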

Chu-Carroll also discussed bias caused by GIGO in her talk, stating that it could be a big problem for NLP systems, leading them to make false inferences - for instance, inferring that George Washington was a "good" president because the Washington Monument is tall and skinny, and tall and skinny are both associated with "good". It's especially a problem for NLP systems trained on what Chu-Carroll called "AI-easy" data sets: data sets consisting of short stories and associated reading comprehension questions and answers, where the answers can be generated by simply paraphrasing part of the text itself, i.e. by syntactic manipulation alone.

The solution to the problem, in her view, is to use what she called "AI-hard" data sets when training NLP systems - data sets where the answers to the questions require making inferences using knowledge from outside the text itself - and to require that such systems also produce a description of the reasoning involved (so that false inferences may be more easily detected). This is essentially the same answer to GIGO & bias in AI given by Mark Hammond in his talk: by building Explainable AI, any bias caused by GIGO becomes immediately obvious once the system's internal reasoning is made clearly understandable.

Conclusion


With the advent of deep learning and big data, the AI field has been making tremendous progress recently, and as a consequence has garnered more public attention. With this increased scrutiny have come questions about whether we are trusting these systems too much, and how we can ensure confidence in the decisions they make. Big data has enabled AI to handle tough problems that have traditionally stymied researchers; however, the use of such massive data sets comes at a cost: it becomes more difficult to see biases in the data, which can cause systems to make discriminatory decisions.

The answer to these questions of trust and bias is largely seen in the focus on Explainable AI. Although traditionally viewed as "black boxes", AI and machine learning systems are not ontologically inscrutable. The idea that their inner workings cannot be understood or explained is merely an epistemological artifact of the prior lack of impetus for researchers to provide explanations - simply getting results has been more important. However, now that questions are being raised about trust and bias in big data and AI, researchers are beginning to focus on extracting details of the inner workings of AI and machine learning systems - and they're having a lot of success! It turns out that such systems are NOT inscrutable black boxes after all: researchers can expose (and have exposed) their inner processing in ways that allow humans to understand how they arrive at their conclusions.

Other highlights:
  • David Wolpert's talk on TANSTAAFL ("there ain't no such thing as a free lunch") and the connection between Explainable AI and deep learning
The conference has a short history, but it is poised to become one of the premier industry conferences in the field.

Bio: Joe Duncan is a software developer by trade, cognitive scientist by education. Interested in the intersection of AI and empirical psychology. He works at Tweepsmap in Southern Ontario-ish, Canada.
