Build Your Own Audio/Video Analytics App With HPE Haven OnDemand – Part 1
In this first part of a two-part tutorial, learn how to leverage HPE Haven OnDemand's Machine Learning APIs to build an audio/video analytics app with minimal time and effort.
Get ready for text indexing
We can use the Text Index Services form on the Haven OnDemand website to create our index with the following name and fields:
Index name: speechindex
Parametric fields: mediatype
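The article creates the index through the website form, but the same setup can also be scripted against the Create Text Index API. The sketch below is in Python rather than the article's C#, and the endpoint path, flavor name, and API key placeholder are assumptions based on Haven OnDemand's documented URL pattern at the time:

```python
import json
import urllib.parse
import urllib.request

# Placeholder -- substitute your own Haven OnDemand API key.
HOD_API_KEY = "YOUR_API_KEY"
HOD_BASE = "https://api.havenondemand.com/1/api/sync"

def build_create_index_params(index_name, parametric_fields):
    """Build the query parameters for the Create Text Index call."""
    return {
        "index": index_name,
        "flavor": "explorer",  # flavor name is an assumption; check the service docs
        "parametric_fields": ",".join(parametric_fields),
        "apikey": HOD_API_KEY,
    }

def create_text_index(index_name, parametric_fields):
    """Issue the Create Text Index request (network call; not run at import time)."""
    params = build_create_index_params(index_name, parametric_fields)
    url = HOD_BASE + "/createtextindex/v1?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example (requires a valid key and network access):
# create_text_index("speechindex", ["mediatype"])
```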
That should be enough information to create our text index on Haven OnDemand. The remaining index fields will be created automatically when we add content to the index in JSON format. To prepare the text content for indexing, we create a C# class named ContentIndex as shown below:
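The original C# listing for ContentIndex is not reproduced in this excerpt. As a rough equivalent, here is a Python sketch of the same structure; the field names are inferred from the article's description (content, text, offset, concepts, occurrences, plus the media metadata) and should be treated as assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IndexItem:
    """One document to be added to the text index (field names are illustrative)."""
    reference: str = ""   # unique document reference
    content: str = ""     # full transcript joined into a single string
    text: List[str] = field(default_factory=list)    # individual recognized words
    offset: List[int] = field(default_factory=list)  # timestamp of each word
    concepts: List[str] = field(default_factory=list)     # key concepts (filled later)
    occurrences: List[int] = field(default_factory=list)  # occurrence count per concept
    mediatype: str = ""
    filename: str = ""
    medianame: str = ""
    language: str = ""

@dataclass
class ContentIndex:
    """Top-level payload: the list of documents to index."""
    document: List[IndexItem] = field(default_factory=list)
```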
When the response from the Speech Recognition API arrives, we will parse it and fill out the ContentIndex object with corresponding values.
Note that when we called the Speech Recognition API, the “interval” parameter was set to 0, so the response is an array of words with offset values and confidence scores. For this demo, we ignore the confidence score. We join the words from the content array into a single text string and store it in the “content” member of the indexItem. Then we fill the indexItem.offset and indexItem.text arrays with the timestamps and words from the response.
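The parsing step above can be sketched as follows. This is a Python illustration rather than the article's C# code, and it assumes the response carries a "document" array whose entries each have "content" (a word), "offset" (a timestamp), and "confidence" keys, matching the description above; the confidence score is simply skipped:

```python
def parse_speech_response(response, item):
    """Fill a plain-dict index item from a Speech Recognition response
    (interval=0, so there is one entry per recognized word)."""
    for entry in response["document"]:
        item["text"].append(entry["content"])   # the recognized word
        item["offset"].append(entry["offset"])  # its timestamp
        # entry["confidence"] is ignored for this demo
    # Join the words into one searchable transcript string.
    item["content"] = " ".join(item["text"])
    return item
```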
We also store some additional metadata taken from the media file, such as the media type, the file name, and the media name, along with the language code selected for speech recognition.
For now, that is enough information to add the text content and the media’s metadata to our text index. However, we also want to find the key concepts of the media content and store them in the index, so we don’t need to call the API every time we need them. We will discuss the usage of the key concepts later in this post. To get the key concepts, we call the Concept Extraction API with the code below:
The text returned from the Speech Recognition API may contain “<Music/Noise>” terms (where the media contains music or noise), so we need to clean the text by removing those unwanted terms before extracting concepts.
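A minimal Python sketch of this cleanup step: the article only names the “<Music/Noise>” marker, so the regex below generalizes slightly by stripping any angle-bracketed term and then collapsing the leftover whitespace — an assumption about what "purifying" the text involves:

```python
import re

def clean_transcript(text):
    """Remove angle-bracketed markers such as "<Music/Noise>" that the
    Speech Recognition API inserts for non-speech segments, then
    collapse any doubled-up whitespace left behind."""
    cleaned = re.sub(r"<[^>]*>", " ", text)
    return re.sub(r"\s+", " ", cleaned).strip()
```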
The only parameter we need to specify for the Concept Extraction API is the input data source. In this case, we use the “text” parameter and pass it the cleaned text string.
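Since the article's C# call isn't reproduced here, the request can be sketched in Python as below. The endpoint path follows Haven OnDemand's documented sync-API pattern and the API key is a placeholder, so treat both as assumptions:

```python
import json
import urllib.parse
import urllib.request

# Placeholder -- substitute your own Haven OnDemand API key.
HOD_API_KEY = "YOUR_API_KEY"

def build_concepts_params(clean_text):
    """Build the request body: the "text" input source plus the API key."""
    return {"text": clean_text, "apikey": HOD_API_KEY}

def extract_concepts(clean_text):
    """POST the cleaned transcript to the Concept Extraction API
    (network call; not run at import time)."""
    url = "https://api.havenondemand.com/1/api/sync/extractconcepts/v1"
    data = urllib.parse.urlencode(build_concepts_params(clean_text)).encode()
    with urllib.request.urlopen(url, data) as resp:
        return json.load(resp)
```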
When we receive the response from the Concept Extraction API, we will parse it and continue to fill out the indexItem.concepts and indexItem.occurrences arrays with values:
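That parsing step might look like the Python sketch below; it assumes the response has a top-level "concepts" array whose entries each carry a "concept" string and an "occurrences" count, which matches the array names the article fills in but is otherwise an assumption about the payload shape:

```python
def parse_concepts_response(response, item):
    """Copy each extracted concept and its occurrence count
    into the index item's parallel arrays."""
    for entry in response["concepts"]:
        item["concepts"].append(entry["concept"])
        item["occurrences"].append(entry["occurrences"])
    return item
```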
Now let’s add the information to our text index by calling the AddToTextIndex API.
First, we create a contentIndex object and fill its document list with the indexItem object. Then we convert the contentIndex object to a JSON string, and finally we call the Add to Text Index API.
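Those three steps can be sketched in Python as follows. The {"document": [...]} payload shape mirrors the document list described above, while the endpoint path and API key are assumptions based on Haven OnDemand's sync-API URL pattern:

```python
import json
import urllib.parse
import urllib.request

# Placeholder -- substitute your own Haven OnDemand API key.
HOD_API_KEY = "YOUR_API_KEY"

def build_index_payload(index_item):
    """Wrap a single document dict in the JSON structure the
    Add to Text Index API expects: a list under "document"."""
    return json.dumps({"document": [index_item]})

def add_to_text_index(index_item, index_name="speechindex"):
    """POST the JSON payload to the Add to Text Index API
    (network call; not run at import time)."""
    params = {
        "json": build_index_payload(index_item),
        "index": index_name,
        "apikey": HOD_API_KEY,
    }
    data = urllib.parse.urlencode(params).encode()
    url = "https://api.havenondemand.com/1/api/sync/addtotextindex/v1"
    with urllib.request.urlopen(url, data) as resp:
        return json.load(resp)
```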
That is all we need to extract text from a piece of media content and add the text and metadata to the text index database. We can repeat this process every time a new media file is uploaded to our online media gallery.
Editor's note: The conclusion of this tutorial will be posted tomorrow on KDnuggets.
Original. Reposted with permission.