- eBook: Fundamentals for Efficient ML Monitoring - Dec 17, 2020.
We've gathered best practices for data science and engineering teams to create an efficient framework to monitor ML models. This eBook is for anyone interested in building, testing, and implementing a robust monitoring strategy in their organization.
- Here’s what you need to look for in a model server to build ML-powered services - Sep 15, 2020.
More applications are being infused with machine learning as MLOps processes and best practices become well established. Critical to these systems are the servers that run the models, which should offer key capabilities to drive successful enterprise-scale production use of machine learning.
- Monitoring Apache Spark – We’re building a better Spark UI - Jul 23, 2020.
Data Mechanics is developing a free monitoring UI tool for Apache Spark to replace the Spark UI with a better UX, new metrics, and automated performance recommendations. Preview its features at a high level, and consider trying it out and giving feedback to support its first release.
- Nitpicking Machine Learning Technical Debt - Jun 8, 2020.
Technical Debt in software development is pervasive. With machine learning engineering maturing, this classic trouble is unsurprisingly rearing its ugly head. These 25 best practices, first described in 2015 and promptly overshadowed by shiny new ML techniques, are updated for 2020 and ready for you to follow -- and lead the way to better ML code and processes in your organization.
- Observability for Data Engineering - Feb 10, 2020.
Understanding whether a system is working as intended requires going beyond traditional monitoring techniques and goals to a newer DevOps concept called observability. Learn more about this essential approach for bringing more context to your system metrics; a small instrumentation sketch follows below.
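To make the distinction concrete, here is a minimal sketch of what observability-style instrumentation can look like in Python, using the Prometheus client library. The metric names, labels, and port are illustrative assumptions, not details from the article:

```python
# Illustrative sketch: exposing labeled metrics that carry context
# (model version, request outcome) beyond a bare request count.
# Metric names, labels, and the port are assumptions for this example.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter(
    "predictions_total",
    "Prediction requests served",
    ["model_version", "outcome"],
)
LATENCY = Histogram("prediction_latency_seconds", "Prediction latency")

@LATENCY.time()
def predict(features):
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real inference
    return "ok" if random.random() > 0.05 else "error"

if __name__ == "__main__":
    start_http_server(8000)  # metrics become scrapeable at :8000/metrics
    while True:
        outcome = predict({"x": 1.0})
        PREDICTIONS.labels(model_version="v3", outcome=outcome).inc()
```

The labels are the point: they let you slice latency and error rates by model version after the fact, rather than deciding up front which questions your dashboards can answer.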
- The Ultimate Guide to Model Retraining - Dec 16, 2019.
Once you have deployed your machine learning model to production, differences in real-world data will result in model drift, so retraining and redeploying will likely be required. In other words, deployment should be treated as a continuous process. This guide defines model drift, explains how to identify it, and covers approaches to enable model retraining; a minimal drift-check sketch follows below.
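As a concrete illustration of identifying drift, here is a minimal sketch of one common check: a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution against recent production data. The function name, significance threshold, and simulated data are assumptions for this example, not the guide's own code:

```python
# Minimal drift-check sketch: flag a single numeric feature whose
# production distribution differs significantly from training.
# The alpha threshold and simulated data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, prod_values, alpha=0.05):
    """Return True if the two samples likely come from different distributions."""
    statistic, p_value = ks_2samp(train_values, prod_values)
    return p_value < alpha

# Example: a simulated shift in the feature's mean triggers the check.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
prod = rng.normal(loc=0.4, scale=1.0, size=5000)  # drifted
print(feature_drifted(train, prod))  # True -> consider retraining
```

In practice a check like this would run per feature on a schedule, with a failing test triggering the retraining pipeline rather than a print statement.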
- Monitoring Models at Scale - Nov 7, 2019.
Catch this Domino webinar on monitoring models at scale, Dec 11 @ 10am PT, covering how to detect changes in the patterns of real-world data your models see in production, track how model accuracy and other quality metrics change over time, and get alerted when health checks fail so that resolution workflows can be triggered.
- Upcoming Webinar, Machine Learning Vital Signs: Metrics and Monitoring Models in Production - Oct 11, 2019.
In this upcoming webinar on Oct 23 @ 10 AM PT, learn why you should invest time in monitoring your machine learning models, the dangers of not paying attention to how a model’s performance can change over time, metrics you should be gathering for each model and what they tell you, and much more.
- How to Monitor Machine Learning Models in Real-Time - Jan 18, 2019.
We present practical methods for near real-time monitoring of machine learning systems that detect system-level or model-level faults and can recognize when the world the model operates in has changed; a sketch of one such model-level check follows below.
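As one example of a model-level fault detector in this spirit, here is a minimal sketch that tracks accuracy over a sliding window of recent labeled predictions and flags a problem when it falls below a baseline. The class, window size, and threshold are illustrative assumptions, not the methods from the article:

```python
# Sliding-window accuracy monitor: a model-level health check that
# updates as labeled feedback arrives. Window size and threshold are
# illustrative assumptions for this sketch.
from collections import deque

class RollingAccuracyMonitor:
    def __init__(self, window_size=500, min_accuracy=0.90):
        self.outcomes = deque(maxlen=window_size)  # 1 = correct, 0 = wrong
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def healthy(self):
        # Avoid alerting before the window has filled with samples.
        if len(self.outcomes) < self.outcomes.maxlen:
            return True
        return sum(self.outcomes) / len(self.outcomes) >= self.min_accuracy

# Usage sketch (hypothetical hooks):
#   monitor = RollingAccuracyMonitor()
#   monitor.record(pred, label)       # call as labeled feedback arrives
#   if not monitor.healthy():
#       trigger_alert()               # hypothetical alerting function
```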
- Are you monitoring your machine learning systems? - Jan 18, 2018.
How are you monitoring your Python applications? Take the short survey - the results will be published on KDnuggets and you will get all the details.
- Brain Monitoring with Kafka, OpenTSDB, and Grafana - Aug 5, 2016.
Interested in using open source software to monitor brain activity and control your devices? Sure you are! Read this fantastic post for some insight and direction.