Drag, Drop, Analyze: The Rise of No-Code Data Science

No-code and low-code functionalities in data science have gained significant traction in recent years. These solutions are well proven and mature, and they make data science accessible to a far wider range of people.



Image generated with DALL·E 3


One of the challenges data practitioners face is having to code everything from scratch for every new use case, which can be time-consuming and inefficient. No-code and low-code solutions help data scientists create reusable solutions that can be applied to a wide range of use cases, saving time and effort and improving the quality of data science projects.

You can do almost everything in data science without writing a single line of code. "No-code or low-code solutions are the future of data science," commented Ingo Mierswa, SVP of Product Development at Altair and founder of RapidMiner, a data science platform. As an established inventor in the no-code data science field, his expertise and contributions have influenced how these functionalities are adopted and implemented across the industry. "These functionalities," Mierswa remarked during our interview, "make it possible for people without a lot of programming experience to build and deploy data science models. This can help to democratize data science and make it more accessible to everyone."

"There was no no-code or low-code platform out there when I found myself being a computer scientist that I kind of recreated, very similar solutions for every new use case. It was an inefficient process, which felt like a huge waste of time," Mierswa shares. Humoring with the basics, he articulated, "If you solve a problem for the second time and you are still coding, it means that you did not solve it correctly the first time. You should have created a solution that can be reused to solve the same or similar problems over and over again. "People, he asserts, "often don't realize how similar their problems are, and as a result, they end up coding the same thing repeatedly. "The question they should be asking is, 'Why am I still coding?' Perhaps they shouldn't in order to save time and effort."


Diverse Acceleration


No-code and low-code data science solutions can be very rewarding. "The first and most important benefit is that they can lead to better forms of collaboration," Mierswa underscores. "Everyone can understand visual workflows or models if they are explained. However, not everyone is a computer scientist or programmer, and not everyone can understand code." So, to collaborate effectively, you need to understand what assets the team is collectively producing. "Data science is, at the end of the day, a team sport. You need people who understand the business problems, whether or not they can code, as coding may not be their daily business."

Then there are other people who have access to data, who are steeped in computational thinking, who think, "Okay, if I want to build, for example, some machine learning model, I need to transform my data in a specific way." That's a great skill, and they need to collaborate too, but for skills like that, ETL products have been out for ages. "Yes, in rare cases, in special, very custom situations, you still need to code. Even in those situations, that's the one percent exception," Mierswa pointed out. "It shouldn't be the norm, but the real magic happens when you bring together all the different skills, data, people, and expertise."

"You will never see that with a pure code-based approach. You will never get the buy-in from stakeholders. That often leads to what I call dead projects. We should be treating data science as a solution for problems. We should not treat it as a scientific approach, where it doesn't matter if we actually create a solution or not." Mierswa reasoned. "It matters. We are solving multi-million dollar business problems here. We should actually work towards the working solution, getting the buy-in, get it deployed, and really improve our situation here. Not saying, 'Yeah, I know what if it fails, I don't care.' So collaboration is a huge benefit," he affirmed.

Acceleration is another benefit, Mierswa explains. When you do repetitive tasks by coding, you're not working in the fastest possible way. A RapidMiner workflow of five or ten operators, for example, is often the equivalent of thousands of lines of code. Copying and pasting code can slow you down, but low-code platforms can help you create custom solutions faster.
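The workflow-versus-code comparison can be made concrete with a small, generic sketch in Python. This is not RapidMiner's actual operator API, just an assumed minimal model of one: each "operator" is a function from rows to rows, and a workflow is a list of operators chained in order, the way a visual tool chains boxes.

```python
# A minimal sketch of a declarative "operator" pipeline. The operator
# names and the run_pipeline helper are assumptions for illustration,
# not any vendor's real API.

def select(*fields):
    """Operator: keep only the named fields of each row."""
    return lambda rows: [{f: r[f] for f in fields} for r in rows]

def filter_rows(predicate):
    """Operator: keep only rows matching the predicate."""
    return lambda rows: [r for r in rows if predicate(r)]

def run_pipeline(rows, operators):
    for op in operators:  # each operator is one box in the visual workflow
        rows = op(rows)
    return rows

data = [{"name": "a", "score": 10}, {"name": "b", "score": 3}]
workflow = [filter_rows(lambda r: r["score"] > 5), select("name")]
print(run_pipeline(data, workflow))  # [{'name': 'a'}]
```

The workflow itself is two declarative steps; everything repetitive lives inside the operators, which is the compression Mierswa describes when a handful of boxes stands in for thousands of lines of bespoke code.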

Accountability, though easily overlooked, may be the most important benefit. When you create a code-based solution, it can be difficult to track who made changes and why. "This can lead to problems when someone else needs to take over the project or when there is a bug in the code. Low-code platforms, on the other hand, are self-documenting. The visual workflows you create are accompanied by documentation that explains what each workflow does. This makes the work easier to understand and maintain, and it also helps to ensure accountability," Mierswa said. "People understand it. They buy into this, but they also can take ownership of those results. Collectively, as a team."


Open Ecosystem


The torrent of AI advancements is transforming the data science landscape, and companies that want to stay ahead of the curve are staying open, embracing open source and open standards rather than hiding their work, an approach that matters greatly in the data science market.

Companies that have remained open hold a winning position because the market moves quickly and requires constant iteration. "This is true for the overall data science market over the past 10 to 20 years," Mierswa reflected. "The fast-paced nature of the market requires constant iteration, making it exceedingly unwise to close down the ecosystem. This is part of why some companies that have traditionally been closed have opened up and even adopted a vendor-neutral approach to support more programming languages and integrations."

While the code-optional approach allows researchers to perform complex data analysis tasks without writing a single line of code, there are situations where coding may be necessary. In such cases, most low-code platforms integrate with programming languages, machine learning libraries, and deep learning environments. They also offer users the ability to explore a marketplace of third-party solutions, Mierswa specified. "RapidMiner even provides an operator framework that allows users to create their own visual workflows. This operator framework makes it easy to extend and reuse workflows, providing a flexible and customizable approach to data analysis."


The Path Ahead


Altair, a leader in computational science and AI, conducted a survey that revealed the widespread adoption of data and AI strategies in organizations worldwide.

The research, which involved over 2,000 professionals from various industries and 10 different countries, revealed a significant failure rate (ranging from 36% to 56%) for AI and data analytics projects when there is friction between different departments within an organization.

The study identified three main sources of friction that hinder the success of data and AI projects: organizational, technological, and financial.

  • Organizational friction arises from challenges in finding qualified individuals to fill data science roles and a lack of AI knowledge among the workforce.
  • Technological friction stems from limitations in data processing speed and issues with data quality.
  • Financial friction is caused by constraints in funding, a focus on upfront costs by leadership, and the perception of high implementation costs.

James R. Scapa, founder and CEO of Altair, emphasized in the news release the importance of organizations leveraging their data as a strategic asset to gain a competitive edge.

"Friction paralyzes mission-critical projects. To overcome these challenges and achieve what Altair terms 'Frictionless AI,' businesses must adopt self-service data analytics tools. These tools," Scapa highlights, "empower non-technical users to navigate complex technology systems easily and cost-effectively, eliminating the friction that hampers progress."

He also acknowledged that obstacles in the form of people, technology, and investment hinder organizations from harnessing data-driven insights effectively. By closing skill gaps, organizations can build shared knowledge across cross-functional teams to overcome that friction.

Saqib Jan is a writer and technology analyst with a passion for data science, automation, and cloud computing.