
Open Innovation and Crowdsourcing in Machine Learning – Getting premium value out of data


Recently, PSL Research University launched a one-week course combining theoretical lectures and practical sessions, with 115 students from various backgrounds and skill levels enrolled. Something quite spectacular happened during the week: the students achieved an astounding level of score improvement - in just three afternoons.



By Akın Kazakçı, co-creator of RAMP.

At the end of last March, PSL Research University launched its first edition of Large-Scale Machine Learning, a one-week course combining theoretical lectures and practical sessions. 115 students from various backgrounds and skill levels were enrolled to improve their skills in machine learning.

Something quite spectacular happened during the week: the students achieved an astounding level of score improvement on a highly complicated machine learning problem - in just three afternoons.

Their scores improved by more than 70% over the initial solution, which had been built by a team of experienced domain specialists and senior data scientists (figure 1).


Figure 1. Prediction score over time. Circles correspond to solutions submitted by the participants. The objective is to minimise the score, which went from 0.12 to 0.03.

Considering that roughly half of the students had no prior exposure to machine learning, and that the other half were mostly beginners, these improvements are impressive. In fact, this is not the first time we have observed this kind of result: every time we ran a data challenge using the RAMP (rapid analytics and model prototyping) platform, major improvements were made over the initial solution.

So, how does this happen?

And more importantly, is this repeatable - for solving significant scientific challenges and business problems?

Several points should be understood in order to see the twist RAMP brings to crowdsourcing in data science, and why this approach has been so effective at solving machine learning problems.
 

Fact 1. Model development in machine learning is an experimental process

 
The most important characteristic of a prediction model is its quality (i.e., predictive accuracy score). Unfortunately, for any given data set, it cannot be known in advance what kind of techniques and algorithms will yield the best possible model quality.

This creates enormous complexity*: Which approach should be tried first? With what parameters? What transformations should be applied to the data?

Unfortunately, no theory exists that can guide these critical choices. Machine learning practice remains mostly an experimental process where lack of time, knowledge, and resources force you to cut down on the experiments.
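To make this concrete, here is a minimal sketch of what that experimental loop looks like in practice, using scikit-learn on a synthetic dataset. The candidate models and their parameters are illustrative placeholders, not the course's actual setup.

# A minimal sketch of the experimental loop: no theory tells us in advance
# which model family will win on this data, so we run the experiments and
# compare cross-validated scores. Dataset and candidates are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")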

Corollary: the overwhelming majority of the business world operates with under-performing prediction models, without even suspecting it.

Corollary: the overwhelming majority of the business world operates with under-performing prediction models, without even suspecting it. In mission-critical applications of predictive analytics, this is destructive - as competitive advantage is abandoned inadvertently.
 

Fact 2. It’s (also) a game of numbers - building solution diversity through crowdsourcing

 
No matter how much a data scientist knows about machine learning or the specific problem domain, there will always be somebody who will think of trying something she did not.


Figure 2. A 2D projection of prediction profiles (each circle represents a model submitted to the system). Nearby points correspond to models with similar predictions, and circle size corresponds to model quality. It can be seen that a large diversity of models was produced as the space of models was explored.

This is reminiscent of two well-known phenomena from chess playing. In the Einstellung effect, the player tends to use the same methods she has always used, without checking whether better ones exist for the current situation. In chess blindness, there may be a clearly advantageous (or disadvantageous) move that everyone can easily see except the player, who may be tired or overloaded by the cognitive effort. These and other cognitive difficulties cause isolated data scientists to become fixated and trapped within their own frames.

A good strategy to circumvent these difficulties is to introduce parallelism in the search - and to build diversity. In other words, use dozens, if not hundreds, of data scientists to multiply the entry points for exploring the solution space and achieve broader coverage.

It has been well known since Osborn's pioneering work on brainstorming (1953) that, in problem-solving tasks, quantity breeds quality. And model development in data science is a textbook problem-solving (search and optimisation) task in an infinite solution space.

PSL students submitted more than 450 algorithms in just 3 afternoons** - going beyond the diversity that a typical data science team can possibly produce on their own in such a short time (figure 2).
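For readers curious how a map like Figure 2 can be produced, here is a minimal sketch, assuming each model is represented by its vector of out-of-fold predictions and then projected to 2D so that models making similar predictions land close together. The models and data are placeholders, not the actual student submissions.

# Sketch of a "prediction profile" map: one row per model, holding its
# out-of-fold predicted probabilities, projected to two dimensions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}

# Each model's prediction profile: out-of-fold probability of the positive class.
profiles = np.vstack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in models.values()
])

coords = PCA(n_components=2).fit_transform(profiles)
for name, (px, py) in zip(models, coords):
    print(f"{name}: x={px:.3f}, y={py:.3f}")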
 

Fact 3. Numbers are not enough - propagating good ideas

 
Boudreau and Lakhani's 2014 study demonstrates that competition boosts group productivity and the diversity of the explored solutions. That is why the first “phase” of a RAMP is ‘closed’, that is, the participants cannot see each other’s solutions.***


Figure 3. An idea from a participant (Pierre) inspires several other participants, forming the basis of dozens of new and better models. From a high-energy physics anomaly detection RAMP.

In a pure competition, all you get is a winner, not a workable solution.

We also know, both from theory and from practice, that this is not enough. In a pure competition, all you get is a winner, not a workable solution. In the HiggsML challenge, the winner was a brilliant data scientist, but the organisers were never able to use his code - they could not even compile it after two weeks of joint work with him.

As I argued in my own research, another and arguably more important issue with pure competition is the lack of synergy and collaboration between participants. Hundreds of ideas produced by the participants are wasted unless a mechanism for propagating good ideas is introduced into the setup.

That’s why RAMP has a second, open and collaborative phase, where participants who have already made an intense effort to understand the problem are ready to see the value in the ideas of others, get inspired, and contribute to each other’s productivity.

With a collaborative phase, good ideas propagate quickly. Participants [...] see the value in the ideas of others, get inspired, and contribute to each other’s productivity.

Figure 3 is a snapshot taken from a visualisation tool we developed to understand how participants interact and influence each other. The snapshot depicts the relationships (represented as links) between solutions submitted by the participants during a RAMP. In this picture, we can see, for instance, that a participant named Pierre has been very influential in subsequent submissions.
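The tool itself is not public, but the underlying idea can be sketched as a simple directed graph of submissions, where an edge records that a later submission built on an earlier one. The names, scores, and links below are invented purely for illustration.

# Hypothetical influence graph between submissions (networkx assumed available).
import networkx as nx

influence = nx.DiGraph()

# (submission_id, score) - lower is better, as in Figure 1. Invented values.
submissions = [("pierre_1", 0.081), ("alice_2", 0.065), ("bob_3", 0.058), ("alice_4", 0.041)]
for sid, score in submissions:
    influence.add_node(sid, score=score)

# An edge u -> v means "v built on u".
influence.add_edges_from([
    ("pierre_1", "alice_2"),
    ("pierre_1", "bob_3"),
    ("bob_3", "alice_4"),
])

# How influential was each submission, counting direct and indirect descendants?
for sid, _ in submissions:
    print(sid, "influenced", len(nx.descendants(influence, sid)), "later submissions")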
 

Where can RAMP be used in business contexts?

 
Through a dozen successful iterations on difficult problem sets, our RAMP approach has demonstrated that it can help improve model performance in a short period of time. Thus, RAMP is perfectly suited to business contexts where prediction errors immediately incur value loss.

RAMP is perfectly suited to business contexts where prediction errors immediately incur value loss.

Typically, companies where prediction models are actively used can benefit from the RAMP approach right away. This includes entire industries such as banking, finance, insurance, retail and telecom, where there is a wide range of applications including lead scoring, compliance, customer attrition, safety analysis, fraud analysis and customer lifetime value. In all such applications, getting 5% more out of a prediction model yields an immediate return on investment.

Others, who are just beginning or are midway through their digital transformation efforts, should give priority to finding one or two critical applications, as emphasised by Gartner's latest analyses: “Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value and consider experimenting with one or two high-impact scenarios” (Gartner, 2017).

This is a complicated journey that should be accompanied by appropriate methodologies, part of which can be innovation methods and design theory. If, nevertheless, you need a rule of thumb about where machine learning can be applied in your business setting, remember: you can use machine learning everywhere you or your analysts are using linear regression or simple hand-crafted decision rules on a value-intensive problem. There is a great chance that machine learning, even in its simpler forms, can get more out of your data.
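As a rough illustration of that rule of thumb, the sketch below compares a plain linear regression baseline against a gradient-boosted model on a synthetic nonlinear regression task; the dataset and the size of the gap are illustrative only, and results on real business data will differ.

# Compare a linear baseline with a simple ensemble model on nonlinear data.
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=1000, noise=0.5, random_state=0)

for name, model in [("linear baseline", LinearRegression()),
                    ("gradient boosting", GradientBoostingRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {r2:.3f}")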

If you are interested in applying the RAMP approach, please get in touch.


* The popular scikit-learn library we use contains dozens of different regression and classification techniques. Once a method is chosen, the data scientist then needs to select hyper-parameters and tune the model’s performance, which grows the possible combinations from dozens to hundreds or even thousands. Add to that the transformations that can be applied to the initial variables, and you get an infinite space that can only be tamed by human intuition, grounded in an understanding of the problem at hand that goes beyond what is currently possible with AI methods.
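To give a rough sense of these combinatorics, the following snippet counts the configurations in one small, arbitrarily chosen hyper-parameter grid for a single model family.

# Even a modest grid for one estimator already yields over a hundred configurations.
from sklearn.model_selection import ParameterGrid

grid = ParameterGrid({
    "n_estimators": [100, 300, 1000],
    "max_depth": [3, 5, 8, None],
    "min_samples_leaf": [1, 5, 20],
    "max_features": ["sqrt", "log2", None],
})

print(len(grid), "configurations for a single model family")  # 3 * 4 * 3 * 3 = 108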

**Contrary to the usual practice in data challenge competitions, the RAMP system gathers executable code rather than numerical vectors. If an algorithm is successful, it can immediately be deployed in production.
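As a rough illustration, a code submission might look like the sketch below: a small Python file defining a model that the platform itself trains and evaluates. The exact interface shown here is an assumption made for illustration, not the official RAMP specification.

# Hypothetical submission file: the platform, not the participant, calls fit
# and predict_proba on held-out data, so the submitted artefact is runnable code.
from sklearn.ensemble import RandomForestClassifier

class Classifier:
    def __init__(self):
        self.clf = RandomForestClassifier(n_estimators=300, random_state=61)

    def fit(self, X, y):
        self.clf.fit(X, y)

    def predict_proba(self, X):
        return self.clf.predict_proba(X)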

***They do, however, see the leaderboard, the group’s overall progress, and their own position with respect to it.

 
Bio: Akın Kazakçı is an Associate Professor at MINES ParisTech. He specialises in Design Theory and Innovation Management. He holds a B.S. in Industrial Engineering and Operations Research and a PhD in Computer Science. His latest research focuses on the digital transformation of companies and data science process management.

Original. Reposted with permission.
