How to apply machine learning and deep learning methods to audio analysis
Given the growth of automatic speech recognition, digital signal processing, music classification, and virtual assistants, this post focuses on how data scientists and AI practitioners can use a machine learning experimentation platform like Comet.ml to apply machine learning and deep learning methods to audio analysis.
To understand how models can extract information from digital audio signals, we'll dive into some of the core feature engineering methods for audio analysis. We will then introduce Librosa, a great Python library for audio analysis, and wrap up with a short Python example training several neural network architectures on the UrbanSound8K dataset.
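As a quick preview of the kind of workflow covered later in the post, here is a minimal sketch of loading a clip with Librosa and computing MFCC features. The file name `dog_bark.wav` is a hypothetical placeholder, not a file shipped with the post or the UrbanSound8K dataset.

```python
import librosa

# Load an audio file as a floating-point waveform;
# Librosa resamples to 22,050 Hz by default
y, sr = librosa.load("dog_bark.wav")  # placeholder path

# Extract 40 MFCCs (Mel-frequency cepstral coefficients),
# one of the standard features used for audio classification
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)

print(mfccs.shape)  # (40, number_of_frames)
```

Feature matrices like this one are what the neural networks later in the post actually train on, rather than the raw waveform itself.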