Deep Learning is evolving at a fast pace. New techniques, tools, and implementations are changing the field of Machine Learning and bringing excellent results.
D3 is a JavaScript library that continues to grow, both in popularity and in possibilities, capable of creating dynamic, interactive visualizations. This tutorial provides a step-by-step guide on how to create a basic bar chart in D3, populated with data from a CSV file.
Learn how to make great visualizations using Dash with advanced data visualization workshops for Dash, R, Shiny, and Dash R from April 14–15 in Boston, featuring Chris Parmer, the creator of Dash and co-founder of Plotly. Use code KDNUGGETS for 20% off.
State-of-the-art Semantic Segmentation models need to be tuned for efficient memory consumption and fps output to be used in time-sensitive domains like autonomous vehicles.
TensorFlow recently added new functionality, and we can now extend the API to determine the pixel-by-pixel location of objects of interest. So when would we need this extra granularity?
Newer, advanced strategies for taming unstructured, textual data: In this article, we will be looking at more advanced feature engineering strategies which often leverage deep learning models.
Also: Introduction to k-Nearest Neighbors; Descriptive Statistics: The Mighty Dwarf of Data Science; 8 Common Pitfalls That Can Ruin Your Prediction; Will GDPR Make Machine Learning Illegal?; Top 20 Python AI and Machine Learning Open Source Projects
A Rosetta Stone of deep learning frameworks has been created to allow data scientists to easily transfer their expertise from one framework to another.
We take a look at the important things you need to know about sentiment analysis, including social media, classification, evaluation metrics and how to visualise the results.
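As a taste of the classification side, a minimal lexicon-based scorer can be sketched in pure Python. The word lists and scoring rule here are invented for illustration; the article covers much richer approaches.

```python
# Tiny hand-made sentiment lexicons -- invented for illustration only.
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "hate", "terrible", "poor"}

def sentiment_score(text):
    """Naive lexicon score: +1 per positive word, -1 per negative word."""
    score = 0
    for raw in text.split():
        word = raw.strip(".,!?").lower()
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

print(sentiment_score("I love this product, it is great"))  # → 2
print(sentiment_score("Terrible support, I hate waiting"))  # → -2
```

A real system would add negation handling, a proper tokenizer, and evaluation against labelled data, which is where the metrics discussed in the article come in.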
We were recently analyzing how different activation functions interact with each other, and we found that using ReLU after sigmoid in the last two layers worsens the performance of the model.
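One way to see part of the issue in isolation: a sigmoid's output is always strictly positive, so a ReLU applied directly after it passes every value through unchanged and adds no extra nonlinearity. A stdlib-only sketch (the original experiments were on full networks, not single values):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

# Sigmoid outputs lie in (0, 1), which is strictly positive, so ReLU
# applied afterwards acts as the identity on every sigmoid output.
for x in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    s = sigmoid(x)
    assert relu(s) == s  # ReLU changes nothing here
```

This does not explain the whole effect the article measures (gradients and layer interactions matter too), but it shows why stacking these two activations is a questionable design choice.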
A good prediction can help your work and make it easier. But how can you be sure that your prediction is good? Here are some common pitfalls that you should avoid.
This post is a short introductory overview of 12 Unix-like operating system command line tools of value to data science tasks, and the data scientists who perform them.
We examined 140 frameworks and distributed programming packages and came up with a list of the top 20 distributed computing packages useful for Data Science, based on a combination of GitHub, Stack Overflow, and Google results.
No other means of data description is more comprehensive than descriptive statistics, and with ever-increasing volumes of data and the need for low-latency decision making, its relevance will only continue to grow.
We highlight recent developments in machine learning and Deep Learning related to multiscale methods, which analyze data at a variety of scales to capture a wider range of relevant features. We give a general overview of multiscale methods, examine recent successes, and compare with similar approaches.
Credit risk analytics in R will enable you to build credit risk models from start to finish. With access to real credit data on the accompanying website, you will master a wide range of applications.
To help you become a Data Scientist, we put together a guide with answers to: How do you break into the profession? What skills do you need to become a data scientist? Where are the best data science jobs?
The fast.ai library is a collection of supplementary wrappers for a host of popular machine learning libraries, designed to remove the necessity of writing your own functions to take care of some repetitive tasks in a machine learning workflow.
In this article, we show how to use Python libraries and HTML parsing to extract useful information from a website and answer some important analytics questions afterwards.
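As a flavour of the parsing step, here is a minimal sketch using only the standard library's `html.parser`; the article may well use a richer library such as BeautifulSoup, and the HTML snippet below is invented for illustration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every anchor tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A toy page standing in for a fetched website (invented for illustration).
page = '<html><body><a href="/about">About</a> <a href="/jobs">Jobs</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # → ['/about', '/jobs']
```

From extracted structures like this, the analytics questions (link counts, content categories, and so on) reduce to ordinary data wrangling.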
Google CoLaboratory is Google’s latest contribution to AI, wherein users can code in Python using a Chrome browser in a Jupyter-like environment. In this article I share a method, and code, to create a simple binary text classifier using Scikit Learn within the Google CoLaboratory environment.
In this post, I share more technical details on how to build good data pipelines and highlight ETL best practices. Primarily, I will use Python, Airflow, and SQL for our discussion.
Does GDPR require Machine Learning algorithms to explain their output? Probably not, but experts disagree and there is enough ambiguity to keep lawyers busy.
This article gives a brief introduction to evolutionary algorithms (EAs) and describes the genetic algorithm (GA), one of the simplest random-based EAs.
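The core GA loop (selection, crossover, mutation) fits in a few lines of stdlib Python. A minimal binary-encoded sketch on the classic OneMax toy problem; all parameter values are illustrative defaults, not the article's:

```python
import random

def genetic_algorithm(fitness, n_genes=8, pop_size=20, generations=50,
                      mutation_rate=0.1, seed=0):
    """Minimal binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]

    def select():
        # Tournament of size 2: keep the fitter of two random individuals.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_genes)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_genes):             # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# OneMax: maximise the number of 1 bits, a standard GA sanity check.
best = genetic_algorithm(sum)
print(best, sum(best))
```

With selection pressure toward fitter individuals, the population quickly converges near the all-ones string.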
Analytics is becoming an important part of maintenance, with applications to analyzing part failures, using failure distributions to simulate product life, and determining the root cause of failures. We provide an overview of predictive maintenance, its usage, and key issues to be considered.
I now believe that there is an art, or craftsmanship, to structuring machine learning work, and none of the math-heavy books I tended to binge on seem to mention this.
Choropleth maps provide a simple, easy-to-understand visualization of a measurement across different geographical areas, be it states or countries.
There are good reasons to want to use R for text processing, namely that we can do it, and that we can fit it in with the rest of our analyses. Furthermore, there is a lot of very active development going on in the R text analysis community right now.
For the 2018 International Women's Day, we profile 18 inspiring women who lead the field in AI, Analytics, Big Data, Data Science, and Machine Learning.
The best data scientists have strong imaginative skills for not just “thinking outside the box” – but actually redefining the box – in trying to find variables and metrics that might be better predictors of performance.
This post will point out 5 things to know about machine learning: 5 things which you may not know, may not have been aware of, or may have once known and now forgotten.
Major technological advances are providing opportunities for new business models, based on blockchain, which will see an increase in the number of connected devices in our day-to-day lives.
The question has probably come up of whether it’s ever okay to offer your data-related knowledge to people or organizations for free. Does taking that approach ever benefit you?
Time series forecasting is an easy-to-use, low-cost solution that can provide powerful insights. This post will walk through an introduction to the three fundamental steps of building a quality model.
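The teaser does not spell out the three steps, but the modelling step can start as simply as a moving-average baseline. A pure-Python sketch, with all numbers invented for illustration:

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Forecast the next `horizon` points as the mean of the last `window` observations,
    rolling each forecast back into the history."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        avg = sum(history[-window:]) / window
        forecasts.append(avg)
        history.append(avg)  # treat the forecast as the newest observation
    return forecasts

print(moving_average_forecast([10, 12, 11, 13, 12], window=3))  # → [12.0, 12.333...]
```

A baseline like this gives you something to beat: any serious model (ARIMA, exponential smoothing, or a neural network) should outperform it on held-out data before it earns its complexity.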
The fashion industry is an extremely competitive and dynamic market, where trends and styles change in the blink of an eye. Data Science can be applied to historical data to predict which trends will be “hot,” potentially saving a lot of time and money.
There are many different ways to do image recognition. Google recently released a new TensorFlow Object Detection API to give computer vision everywhere a boost.
There are many wonderful things about data science. Its extreme breadth is not one of them: the title of data scientist means something different at every company.