Build Your Own Audio/Video Analytics App With HPE Haven OnDemand – Part 2
In the conclusion to this two part tutorial, learn how to leverage HPE Haven OnDemand's Machine Learning APIs to build an audio/video analytics app with minimal time and effort.
Instant search function
We will let users search for a particular word within the selected media content, either by typing the word into the “instant search” box or by double-clicking a word in the rich text field (double-click search is only supported in the Windows/Windows Phone app). The process is simple: given the search word, we look it up in the text array. If the word is found, we use its position (index) in the text array to read the corresponding timestamp from the offset array, and finally set the media player’s current position to that timestamp. See the instantSearchText() function for the detailed implementation.
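The lookup can be sketched as follows. This is a hypothetical, simplified version of what instantSearchText() does; the array names, the timestamp unit (milliseconds), and the helper name are assumptions for illustration:

```python
# Sketch of the instant-search lookup: the Speech Recognition API gives
# us two parallel arrays -- the recognized words and their offsets
# (assumed here to be timestamps in milliseconds).

def instant_search_text(query, words, offsets):
    """Return the timestamp of the first occurrence of `query` in the
    transcript, or None if the word is not found."""
    query = query.strip().lower()
    for index, word in enumerate(words):
        if word.lower() == query:
            # The offset array is aligned with the text array, so the
            # same index yields the word's timestamp.
            return offsets[index]
    return None

words = ["welcome", "to", "haven", "ondemand"]
offsets = [120, 480, 910, 1350]

position = instant_search_text("haven", words, offsets)
# In the app, we would now set the media player's current position
# to `position` (here: 910).
```

The double-click path works the same way: the clicked word becomes the query, and the player seeks to the returned timestamp.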
Listing key concepts and finding similar documents from Haven OnDemand public indexes
We will show our audience a list of key concepts found in the selected media and allow them to click any concept to get reference articles related to it. To do this, we will use a WebView component to display the key concepts as an HTML document. We will wrap every concept in a hyperlink tag and set its href attribute from the concept. See the example below:
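As a sketch, the HTML fed to the WebView might be generated like this. The “hod_link” keyword in the href comes from the article; the helper name and the exact markup are illustrative:

```python
def concepts_to_html(concepts):
    """Wrap each key concept in a hyperlink whose href carries the
    "hod_link" keyword, so clicks on concepts can be recognized later
    when the WebView reports the navigated URL."""
    links = ['<a href="hod_link:{0}">{0}</a>'.format(c) for c in concepts]
    return "<html><body>" + "<br/>".join(links) + "</body></html>"

html = concepts_to_html(["human rights", "climate change"])
```

Each anchor both displays the concept and carries it in the href, so the click handler can recover the concept text without extra bookkeeping.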
When a concept link is clicked, we check whether the href attribute contains the “hod_link” keyword and, if so, call the Find Similar API to find similar documents in the Haven OnDemand public indexes. We also need to detect the language of the media content so we can specify relevant public indexes for that language. See the Haven OnDemand documentation for the list of public databases.
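A minimal sketch of building the request, assuming the synchronous REST endpoint. The index names in PUBLIC_INDEXES are placeholders only; the real names must come from the Haven OnDemand public-dataset list:

```python
from urllib.parse import urlencode

# Illustrative mapping of media language to public indexes -- the
# actual index names are listed in the Haven OnDemand documentation.
PUBLIC_INDEXES = {
    "en": "wiki_eng,news_eng",
    "fr": "wiki_fre,news_fre",
}

def build_find_similar_url(api_key, concept, language):
    """Build the Find Similar request URL: the clicked concept goes in
    the "text" parameter, the language selects the public indexes, and
    "print_fields" limits the response to title, weight, and summary."""
    params = {
        "apikey": api_key,
        "text": concept,
        "indexes": PUBLIC_INDEXES.get(language, "wiki_eng"),
        "print_fields": "title,weight,summary",
    }
    return ("https://api.havenondemand.com/1/api/sync/findsimilar/v1?"
            + urlencode(params))

url = build_find_similar_url("<your-api-key>", "human rights", "en")
```

The URL can then be fetched with any HTTP client; the JSON response carries a "documents" array with the requested fields.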
In the Find Similar request, we specify the public indexes based on the media content’s language and pass the selected concept in the “text” parameter. We also use the “print_fields” parameter to specify the return fields we are interested in:
- Title: the title of the document, if available
- Weight: the relevance score of the related document
- Summary: a short description of the related document
When the API returns the result, we parse the response and list all relevant items with the information above. For example, when the user clicks on the “human rights” concept:
Title: Keep Up The Fight. Simonetta Lein The Wishmaker Meets Human Rights Activist Sara Baresi
Title: The Alchemy of Business & Human Rights (Part II): A Pendulum Swing?
The user can then click the website link to launch the Web browser and open that article.
Highlighting opinions and sentiment statements
We will help the audience quickly find the positive and negative sentiment statements in the media content by calling the Sentiment Analysis API. Positive statements will be highlighted in green and negative statements in red. We will also underline the sentiment words when they are present.
To do that, we call the API with the whole text string of the media content and specify the language code. Because the language code we used for the Speech Recognition API and stored in our text index differs from the language code required by Sentiment Analysis, we define a LanguageCollection dictionary to translate between the two and obtain the code the Sentiment Analysis API expects.
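A sketch of that translation step. The exact code pairs below are assumptions (a small illustrative subset); both sets of codes should be verified against the respective API documentation:

```python
# Illustrative subset: the Speech Recognition API uses IETF-style codes
# such as "en-US", while the Sentiment Analysis API expects codes such
# as "eng". The real table should cover every language the app supports.
LanguageCollection = {
    "en-US": "eng",
    "en-GB": "eng",
    "fr-FR": "fre",
    "de-DE": "ger",
    "es-ES": "spa",
}

def sentiment_language(speech_code):
    """Translate a Speech Recognition language code into the code the
    Sentiment Analysis API expects, falling back to English."""
    return LanguageCollection.get(speech_code, "eng")
```

The translated code is then passed in the Sentiment Analysis request’s language parameter alongside the full transcript text.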
When the API returns the result, we parse the response and display the content as an HTML document in which the positive and negative sentiment statements are highlighted. See the parseSentimentAnalysis() function for more details.
Finding interesting entities
The media content may contain interesting data such as the names of famous people, places, or companies. We will call the Entity Extraction API to extract those entities from the media content (see the list of supported entity types in the Entity Extraction API documentation) and provide users with additional information about them. For example, for a famous person, the API can return a quick profile such as a list of professions, the date of birth, an image, and a link to the person’s Wikipedia page.
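A hedged sketch of building that request. The entity_type values follow the English entity types the API documents (people_eng, places_eng, companies_eng); repeating the parameter requests several types at once. The helper name is illustrative:

```python
from urllib.parse import urlencode

def build_entity_extraction_params(api_key, text):
    """Build the query for the Entity Extraction API: the whole media
    transcript goes in "text", and each entity_type entry asks for one
    kind of entity. Using a list of pairs lets entity_type repeat."""
    return [
        ("apikey", api_key),
        ("text", text),
        ("entity_type", "people_eng"),
        ("entity_type", "places_eng"),
        ("entity_type", "companies_eng"),
    ]

query = urlencode(build_entity_extraction_params(
    "<your-api-key>", "Nelson Mandela spoke in Johannesburg..."))
```

The encoded query is appended to the extractentities endpoint URL; the response carries the matched entities together with their additional information (profession, date of birth, image, Wikipedia link, and so on).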
The Entity Extraction API can thus be used to find famous people, places, and companies: we pass the entire text string of the media content in the “text” parameter and specify the entity types we are interested in. When the API returns the result, we parse the response and display the content as an HTML document. See the parseEntityExtraction() function for more details. You may want to define a custom entity extraction dataset to suit your use case; while there is currently no way to train the Entity Extraction API yourself, you can use the Categorization API for custom entity extraction.
Congratulations! You should now be able to build the application and extend it with even more advanced features. For example, you could categorize media items based on their actual text content using the Haven OnDemand document categorization API, then list media by category, or extract entities relevant to a content category by passing the appropriate entity types to the Entity Extraction API. For each media transcription, you could use the Find Similar API to help users identify media with similar content. I will leave this open to your practical and innovative choices to further exploit the full capabilities of the Haven OnDemand platform.
Original. Reposted with permission.