
What is it like to be a machine learning engineer in 2018?


A personal account of why 2018 is going to be a fun year for machine learning engineers.



By Joe Isaacson, VP Engineering at Asimov.io.

2018 is quite a fun year to be a machine learning engineer (MLE). There are so many tools, platforms and resources available that MLEs can focus their time on solving problems critical to their field or company instead of worrying about building platforms and hand-rolling numerical algorithms.

Google Cloud has easy means of building and deploying TensorFlow models, including its new TPU support in beta; AWS has an ever-evolving suite of deep learning AMIs; and Nvidia has a great deep learning SDK. In parallel, Apple's Core ML and Android's NN API make it simpler and faster to deploy models on phones; this will continue to push the boundary for developing and releasing ML apps.

[Image: Big Data Landscape 2017]
With all of the above, there is healthy competition among the big players in the cloud space, pushing the whole ecosystem forward. And yet most of them are finding ways to collaborate on open standards like ONNX. Such collaboration empowers researchers and engineers to quickly prototype models and share results across platforms and languages.
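
To make that workflow concrete, here is a minimal sketch (my illustration, not from the article), assuming PyTorch as the training framework; the toy model and the file name "model.onnx" are placeholders:

```python
import torch
import torch.nn as nn

# A toy model standing in for whatever was prototyped in research.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Export traces the model with a dummy input of the expected shape and
# writes a framework-neutral ONNX graph to disk.
dummy_input = torch.randn(1, 10)
torch.onnx.export(model, dummy_input, "model.onnx")
```

An engineer on a different stack could then load that file with any ONNX-compatible runtime instead of re-implementing the model.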

Professional development and education have never been easier with the growth of open resources for sharing ML knowledge. arXiv (and bioRxiv) continues to grow in adoption, Coursera courses span linear algebra, machine learning and deep learning, and countless blog posts are published every day with amazing visualizations and interpretations of modern research. It's a great year to learn!

That said, it’s not all sunshine and rainbows. There are still plenty of challenges in 2018, in no particular order:

There's a heavy bias towards treating deep learning as a hammer. I frequently speak with individuals who are deep learning specialists: folks who don't know what an SVM is but can perfectly recall the VGGNet architecture. This is certainly not true of everyone, but I feel the average MLE solution is trending towards "how do I apply CNNs or LSTMs to this problem?"

Keeping on top of research is getting exponentially harder as the field grows. I try to set aside a few hours a week to read papers, but this is nowhere near sufficient to get through a tenth of the papers published. Similarly, much of the low-hanging fruit in supervised computer vision and natural language understanding has been picked. There are now deployed apps for recognizing objects, synthesizing speech and translating signs in foreign languages, all of which perform at human-level accuracy. This is awesome for consumers of these applications, but it makes it difficult for MLEs to make step-function improvements.

MLEs still spend significant time data wrangling, that is, getting data from some raw format into a matrix. There are great tools, databases, queues and ETL frameworks to help with this, but fundamentally data wrangling still involves manually writing per-problem schemas, partitions, and so on.
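
As an illustration only (not from the article), a typical "raw file to matrix" pass with pandas might look like the sketch below; the file name, columns and cleaning rules are hypothetical stand-ins for the per-problem schema work described above:

```python
import pandas as pd

# Raw events with a hand-written, per-problem schema.
df = pd.read_csv("events.csv")
df = df.dropna(subset=["user_id", "label"])          # ad-hoc cleaning rules
df["signup_date"] = pd.to_datetime(df["signup_date"])
df["account_age_days"] = (pd.Timestamp("2018-06-01") - df["signup_date"]).dt.days

# One-hot encode categoricals and assemble the numeric matrix X and target y.
X = pd.get_dummies(df[["country", "account_age_days"]], columns=["country"]).values
y = df["label"].values
```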

Large companies across technology, finance and healthcare have advantages over smaller players through their access to customer data. As education, models and software have become openly shared, data has grown into a valuable commodity.

Privacy concerns are at an all-time high. Privacy is an important aspect of any software system, but it comes with a trade-off in model accuracy. The more MLEs know about you, the better they can recommend content, target ads, suggest healthcare treatments and drive your car. There's a lot of interesting research into differentially private machine learning algorithms, and I'm really curious to see how it impacts the industry.
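
To make the accuracy trade-off concrete, here is a toy sketch (my illustration, not from the article) of the classic Laplace mechanism applied to a simple query; the data and epsilon are made up, and real differentially private training is far more involved:

```python
import numpy as np

def private_mean(values, lower, upper, epsilon):
    """Differentially private estimate of the mean of values bounded in [lower, upper]."""
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)        # max change from altering one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

ages = np.array([23, 35, 41, 29, 52, 38])
print(private_mean(ages, lower=18, upper=90, epsilon=0.5))  # smaller epsilon: more privacy, more noise
```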

There's a continual debate about the need for interpretability in machine learning, including this panel discussion from 2017. I think it's problem-dependent. There are diagnostic-prediction problems where doctors will want to know why an agent is suggesting a treatment course, and autonomous vehicle architects may want to introspect their models to better understand failure modes. But when, where and how to enforce interpretability is an open question.

There's a lot of talk about two polar-opposite futures: a second AI winter and artificial general intelligence taking over the world. I won't use this answer to comment on either, but there are MLEs arguing both sides of each question, and it leads to continual debate. Discussing the future of ML is not inherently bad: it helps spark ethical discussions relevant to today's research and helps the ML community reach a broader audience. But debates without objective data can be quite distracting and easily misinterpreted by the wider public.

Bio: Joe Isaacson is VP of Engineering at Asimov.io, based in Cambridge, MA, USA. He is a machine learning leader, engineer and data scientist passionate about building scalable teams and ML software. He <3 Python, search engines and recommendation systems. Original. Reposted with permission.
