Are you monitoring your machine learning systems?
How are you monitoring your Python applications? Take the short survey - the results will be published on KDnuggets and you will get all the details.
By Robert Dempsey. Sponsored Post.
When it comes to monitoring machine learning systems, the current state of the art consists of gluing together many pieces of tech.
Is there a better way to hunt down and eradicate the bottlenecks in your ML systems?
There may be.
The Challenges With Monitoring a Machine Learning System
Even the simplest machine learning systems consist of many moving parts. The most basic I've built was deployed to a single server, and the most complex consisted of more than 40 microservices feeding into a large processing and analysis cluster (and don't get me started on all the ways we stored the data).
In all cases we used monitoring.
But here's the rub - the metrics we used were surface level.
To look at the performance of my code, I either added timing information and output it to the logs, or profiled the code locally. Both solutions were suboptimal.
First, I had to roll my own metrics. Second, since I develop on a Mac and deploy to Linux, there are variables in play on the server that local profiling can't capture. Third, I had to create complex dashboards to get a full picture.
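To give a concrete sense of what "rolling my own metrics" means, here is a minimal sketch of the hand-timed logging approach I'm describing: a decorator that writes wall-clock timings to the application log. The `train_model` function is a hypothetical placeholder, not code from any real system.

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def log_timing(func):
    """Log the wall-clock duration of each call to func."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logger.info("%s took %.4f seconds", func.__name__, elapsed)
    return wrapper

@log_timing
def train_model(n):
    # Hypothetical stand-in for a real training step
    return sum(i * i for i in range(n))

result = train_model(100_000)
```

This gets the job done, but every timing you want has to be wired in by hand, and the raw log lines still need to be parsed and charted before they tell you anything.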
These challenges are present in all Python applications I've encountered.
And that leads to my question: how are you monitoring your Python applications?
I've teamed up with Scout to find out, and we need your help!
Today we're launching our Python Monitoring Survey.
Take the survey here
The survey will be open from today through the middle of February. The results will be published on this blog at the beginning of March.
Optionally add your email at the end of the survey and we'll give you all the charty goodness you can handle.
Thanks for your help!