Why Data Scientists Must Know About Change Management
Change management may seem like the opposite of data science, but in reality the two are closely related. Without proper implementation, a data science project fails.
By Jurjen Helmus, Amsterdam University of Applied Sciences (HvA).
This week I was invited to give a guest lecture on data science to a group of change managers. We discussed the social effects of predictive maintenance on the workforce and how to handle the implementation of this concept from a change management perspective. By the end of the lecture I had come to realize that it is the data scientist who should know about change management rather than the other way around. This could make the implementation of the improvements found by data scientists much more effective, since it tackles behavioral change: one of the biggest hurdles in implementing our ideas and realizing our goals.
The model of change management
Change management may be seen as a domain opposed to data science: data science is hard, change is soft. Change management is not about the solution to messy problems but about the process of change; data science is all about the solution, found after sorting out the messy data. One of the core ideas of change management is that change comes as part of a process and not at a sudden, let alone predicted, moment. Change comes from people's willingness to accept it.
One of the models of change management that we discussed during my lecture states that "change is the result of a change in behavior, due to the reaction to a new reality that exists due to interventions." Change is a result, not the starting point. Goals are what you are heading for, and change is needed to reach them. But to get there, one must think backward from the goal to the point where interventions need to be implemented to create the new reality.
And that is what I realized: creating a new reality is typically what we data scientists are doing - based on our insights on how a system behaves - based upon a dataset.
A successful change implementation, however, does not start with the new reality: it starts by working with the people who are to react to this new reality. I conclude that it might be effective for us data scientists to embrace the concepts of change management and seek collaboration with this methodology before implementing our results.
The case - predictive maintenance
Let me illustrate my ideas by elaborating on a specific case. Imagine a factory producing a not-too-complex product. Maintenance and overhaul are performed by the technical department. Every day, coworkers on the production line and employees from the maintenance department work with the machinery, and as such they know and sense the state of the devices. Take James, a 45-year-old maintenance engineer who has worked at this company for more than 25 years. He can feel whether a machine needs maintenance simply by twisting bolts and arms. Of course machinery sometimes fails, but hey - it happens.
Now predictive maintenance comes in. Data scientists take their seats and start working on predicting upcoming maintenance before anyone could sense that the machinery would fail. Note that the intention to start doing data science within the organization is an intervention in itself. By the end of the project, predictive models have indeed been created for many machines, predicting the failure of a component 3 weeks ahead with 97% accuracy. The models suggest that an 85% reduction in production failures is within reach.
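To make this concrete, here is a deliberately minimal sketch of what such a prediction could look like. The sensor data, threshold, and `flag_for_service` function are all hypothetical illustrations, not the project's actual model: a real system would train a classifier on labeled failure histories, but the core idea - raising a maintenance flag from sensor trends before a human could sense trouble - is the same.

```python
def flag_for_service(readings, window=3, threshold=0.8):
    """Flag a machine for maintenance when the rolling mean of its
    normalized sensor readings exceeds a threshold.

    This stands in for a trained failure-prediction model; the window
    size and threshold here are arbitrary illustrative values.
    """
    if len(readings) < window:
        return False  # not enough history to judge
    recent = readings[-window:]
    return sum(recent) / window > threshold

# Hypothetical normalized vibration readings for two machines
healthy = [0.20, 0.30, 0.25, 0.30, 0.28]
degrading = [0.30, 0.50, 0.70, 0.85, 0.90]

print(flag_for_service(healthy))    # False: readings stay low
print(flag_for_service(degrading))  # True: upward trend crosses threshold
```

The point for the change-management discussion is visible even in this toy version: the algorithm flags the "degrading" machine while it is still running, before anything is audibly or visibly broken.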
The challenge - implementation
Now this is where change management comes in. What will change after implementation? Behavior may need to change, but there is a bigger issue: predictive maintenance is a change of reality. In the old reality, machinery was taken down for maintenance when something was bound to fail, or, in the case of preventive maintenance, it was serviced periodically. In the new reality, machinery is potentially put up for servicing by an algorithm even before human senses could detect a problem. In the new reality, things are fixed that aren't broken yet.
This may ignite questions in technical coworkers such as "are my skills still in demand?"
Successful implementation of predictive maintenance is thus the result of a change in behavior due to the reaction to a new reality. The challenge is to not only focus on the core data science project but also to get the new reality accepted. This requires a range of interventions parallel to, or even before, the data science work.
Best practices for interventions
So what are the best practices of change management, and how should they be used? The first thing to acknowledge is that the change starts even before the data scientists begin their analysis. Any project that involves smart predictive algorithms intervenes in the work of smart, skilled employees. Therefore, engage with these employees before showing any results. Project managers may assume that people want to contribute to something bigger than themselves. In the case of predictive maintenance, striving for zero production failures may inspire more than it repels.
The next thing is to realize that domain knowledge needs to be part of a data science project. Therefore, engage employees from the work floor in the project and let them share what they know about the machinery or whatever subject is under analysis. In doing so, your employees' skills get acknowledged. They know that you know that they are skilled. The importance of being heard cannot be overestimated.
Also, both during and at the end of the project, test with your skilled employees. Challenge them to get acquainted with the prediction algorithms. Moreover, let them compete, human versus machine: who can identify the machines most in need of service most efficiently?
Furthermore, focus on the bigger picture. Implementing data science in any company is not about ousting employees; it is about doing the things we do more effectively and more efficiently. My expectation is that there will always be a need for skilled employees after implementation.
Bio: Jurjen Helmus (1980) works at the Amsterdam University of Applied Sciences (HvA), where he coordinates the minor Big Data in Urban Technology. In addition, he is conducting PhD research at the University of Amsterdam, where he builds agent-based simulation models of charging infrastructure for electric vehicles.