How to Make AI More Accessible
I was recently a guest speaker at the Stanford AI Salon on the topic of accessibility in AI, which included a free-ranging discussion among assembled members of the Stanford AI Lab. There were a number of interesting questions and topics, so I thought I would share a few of my answers here.
By Rachel Thomas, Co-founder at fast.ai
Q: What 3 things would you most like the general public to know about AI?
- AI is easier to use than the hype would lead you to believe. In my recent talk at the MIT Technology Review conference, I debunked several common myths: that you must have a PhD, a giant data set, or expensive computational power to use AI.
- Most AI researchers are not working on getting computers to achieve human consciousness. Artificial intelligence is an incredibly broad field, and a somewhat misleading name for a lot of the stuff included in the field (although since this is the terminology everyone uses, I go along with it). Last week I was 20 minutes into a conversation with an NPR reporter before I realized that he thought we were talking about computer systems achieving human consciousness, and I thought we were talking about the algorithm Facebook uses to decide which advertisements to show you. Things like this happen all too often.
95% of the time when people within AI talk about AI, they are referring to algorithms that do a specific task (e.g. that sort photos, translate language, or win Atari games) and 95% of the time when people outside of AI hear something about AI they think of humanoid robots achieving super-intelligence (numbers made up based on my experience). I believe that this leads to a lot of unnecessary confusion and fear.
- There are several problems that are far more urgent than the threat of evil super-intelligence, such as how we are encoding racial and gender biases into algorithms (that are increasingly used to make hiring, firing, healthcare benefits, criminal justice, and other life-impacting decisions) or increasing inequality (and the role that algorithms play in perpetuating & accelerating this).
Also, last year I wrote a blog post, Credible Sources of Accurate Information About AI, geared towards a general audience, which may be a handy resource for people looking to learn more.
Q: What can AI researchers do to improve accessibility?
A: My wish-list for AI researchers:
- Write a blog post to accompany your paper. (Some of the advice we give fast.ai students about reading academic papers is to first search for a blog-post version someone has written.)
- Share your code.
- Try running your code on a single GPU (which is what most people have access to). Jeremy and I sometimes come across code that was clearly never run on just one GPU.
- Use concrete examples in your paper. I was recently reading a paper on fairness that was all math equations, and even as someone with a math PhD, I had trouble mapping the equations onto the real world.
Q: I recently taught a course on deep learning and had all the students do their own projects. It was so hard. The students couldn’t get their models to train, and we were like “that’s deep learning”. How are you able to teach this with fast.ai?
A: Yes, deep learning models can be notoriously finicky to train. We’ve had a lot of successful student projects come out of fast.ai, and I believe some of the factors for this are:
- The fast.ai course is built around a number of practical, hands-on case studies (spanning computer vision, natural language processing, recommendation systems, and time series problems), and I think this gives students a good starting point for many of their projects.
- We’ve developed the fastai library with the primary goal of having it be easy for students to use and to apply to new problems. This includes innovations like a learning rate finder, setting good defaults, and encoding best practices (such as cyclical learning rates).
- fast.ai is not an education company; we are a research lab, and our research is primarily on how to make deep learning easier to use (which closely aligns with the goals of making deep learning easier to learn).
- All this said, I do want to acknowledge that deep learning models can still be frustrating and finicky to train! I think everyone working in the field does routinely experience these frustrations (and I look forward to this improving as the field matures and advances).
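To make the learning rate finder idea mentioned above concrete: the technique (from Leslie Smith's learning rate range test, which fastai's `lr_find` is based on) sweeps the learning rate exponentially while training, records the loss at each rate, and lets you pick a rate somewhat below the point where the loss starts to climb. Here is a minimal sketch on a toy quadratic loss, not the fastai implementation itself (the function name and toy problem are my own for illustration):

```python
import numpy as np

def lr_find(grad_fn, loss_fn, w0, lr_min=1e-5, lr_max=5.0, steps=50):
    """Sweep learning rates exponentially from lr_min to lr_max,
    taking one gradient step per rate and recording the loss."""
    lrs = np.geomspace(lr_min, lr_max, steps)
    w = float(w0)
    losses = []
    for lr in lrs:
        w = w - lr * grad_fn(w)   # one gradient descent step at this rate
        losses.append(loss_fn(w))
    return lrs, np.array(losses)

# Toy problem: minimize f(w) = w^2, whose gradient is 2w.
lrs, losses = lr_find(lambda w: 2 * w, lambda w: w ** 2, w0=5.0)
# The loss falls while the rate is reasonable, then blows up once the
# rate is too large; choose a rate somewhat below that blow-up point.
```

In practice you would plot `losses` against `lrs` on a log scale and read the "elbow" off the curve; fastai automates exactly this kind of sweep over mini-batches of a real model.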
Q: What do you mean by accessibility in AI? And why is it important?
A: By accessibility, I mean that people from all sorts of backgrounds (education, location, domain of expertise, race, gender, age, and more) should be able to apply AI to problems that they care about.
I think this is important for two key reasons:
- On the positive side, people from different backgrounds will know and care about problems that nobody else knows about. For instance, we had a student this fall who is a Canadian dairy farmer using deep learning to improve the health of his goats’ udders. This is an application that I never would have thought about.
- People from different backgrounds are necessary to try to catch biases and negative impacts of AI. I think people from under-represented backgrounds are most likely to identify the ways that AI may be harmful, so their inclusion is vital.
I also want people from a variety of backgrounds to know enough about AI to be able to identify when people are selling snake oil (or just over-promising on what they could reasonably deliver).
Currently, for our Practical Deep Learning for Coders course, you need 1 year of coding experience to be able to learn deep learning, although we are working to lower that barrier to entry. I have not checked out the newest versions of deep learning SaaS APIs (it's on my to-do list) that purportedly let people use deep learning without knowing how to code, but the last time I checked, these APIs suffered from several shortcomings:
- they did not deliver truly state-of-the-art results (which is what most people want; for that, you needed to write your own code),
- they only worked on fairly limited problems, or
- using the API effectively required knowing as much as writing your own code would (in which case, people prefer to write their own, since it gives them more control and customization).
While I support the long-term vision of a machine learning API that anyone can use, I’m skeptical that the technology is there to truly provide something robust and performant enough to eliminate code yet.
Q: Do we really need everyone to understand AI? If the goal is to make an interface as user-friendly as a car, should we just be focusing on that? Cars have a nice UI, and we can all drive cars without understanding how the engine works. Does it really matter who developed cars initially?
A: While I agree with the long-term goal of making deep learning as easy to use as a car, I think we are a long way from that, and it very much matters who is involved in the meantime. It is dangerous to have a homogeneous group developing technology that impacts us all.
For instance, it wasn’t until 2011 that car manufacturers were required to use crash test dummies with prototypical female anatomy, in addition to the “standard” male test dummies (a fact I learned from Datasheets for Datasets by Timnit Gebru, et al., which includes fascinating case studies of how standardization came to the electronics, automobile, and pharmaceutical industries). As described in the paper, “a safety study of automobiles manufactured between 1998 and 2008 concluded that women wearing seat belts were 47% more likely to be seriously injured than males in similar accidents”.
Extending the car analogy further, there are many things that are sub-optimal about our system of cars and how it developed: the USA’s under-investment in public transportation, the way free parking led to sprawl and congestion, and the negative impacts of choosing fossil fuels over electric power. Could these things have been different if those developing cars had come from a wider variety of backgrounds and fields?
And finally, returning to the question of AI usability and access, it matters both:
- Who helps create the abstractions and usability interfaces
- Who is able to use AI in the meantime.
Q: Is it really possible for people to learn the math they need on their own? Isn’t it easier to learn things through an in-person class– so as a grad student at Stanford, should I take advantage of the math courses that I have access to now?
A: At fast.ai, we highly recommend that people learn math on an as-needed basis. Feeling like you need to learn all the math you might ever need before getting to work on the topic you’re excited about is a recipe for discouragement, and most people struggle to maintain motivation. If your goal is to apply deep learning to practical problems, much of that math turns out to be unnecessary.
As for learning better through in-person courses, this is something that I think about. Even though we put all our course materials online, for free, we’ve had several students travel from other countries (including Australia, Argentina, England, and India) to attend our in-person course. If you are taking a course online or remotely, I definitely recommend trying to organize an in-person study group around it with others in your area. And no matter what, seek out community online.
I have a PhD in math, so I took a lot of math in graduate school. And sadly, I have forgotten much of the stuff that I didn’t use (most of my graduate courses were taken over a decade ago). It wasn’t a good use of time, given what I’ve ended up doing now. So I don’t know how useful it is to take a course that you likely won’t need (unless you are purely taking it for the enjoyment, or you are planning to stay in academia).
And finally, in talking with students, I think that people’s anxiety and negative emotions around math are a much bigger problem than their failure to understand concepts. There is a lot broken with how math is taught and the culture around it in the USA, and so it’s understandable that many people feel this way.
This is just a subset of the many topics we discussed at the AI Salon. All questions are paraphrased as I remember them and are not exact quotes. I should also note that my co-presenter, Stanford post-doc Mark Whiting, made a number of interesting points around HCI and symbolic systems. I enjoyed the event and want to thank the organizers for having me!
Bio: Rachel Thomas is co-founder at fast.ai and a professor of the MS in Analytics program at the University of San Francisco. She is also an Ask-a-Data-Scientist Advice Columnist, a Duke Math PhD, ex-Quant, and ex-Uber Software Dev.
Original. Reposted with permission.
- Credible Sources of Accurate Information About AI
- How (and Why) to Create a Good Validation Set
- 5 Things to Know About Machine Learning