Talks, tutorials and playlists – you could not get a more gentle introduction to Machine Learning (ML) in Finance. Got a quick 4 minutes or ready to study for hours on end? These videos cover all skill levels and time constraints!
This tutorial will lay a solid foundation for your understanding of TensorFlow, the leading Deep Learning platform. The second part shows how to get started, install it, and build a small test case.
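As a hedged illustration of what such a "small test case" might look like (not the tutorial's own example, and assuming a TensorFlow 2.x install with eager execution):

```python
# A minimal sanity check that TensorFlow is installed and working:
# build two constant tensors and multiply them.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [0.5]])
product = tf.matmul(a, b)

print(tf.__version__)   # confirm the installed version
print(product.numpy())  # expected: [[2.], [5.]]
```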
This article walks you through a step-by-step process and comes with starter code for building your own chatbot. At the end, we also provide some pointers for folks looking to take this proof of concept to production.
Perhaps the most significant development in IT over the past few years, blockchain has the potential to change the way the world approaches big data, with enhanced security and data quality.
Keras is a Python deep learning library for Theano and TensorFlow. The package is easy to use and powerful, as it provides users with a high-level neural networks API to develop and evaluate deep learning models.
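To give a flavor of that high-level API, here is a minimal sketch (illustrative toy data and layer sizes, not drawn from the article) that defines, compiles, trains, and evaluates a small network:

```python
# Define, compile, fit, and evaluate a tiny feed-forward model in Keras.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data: 1000 samples, 8 features, binary labels derived from a simple rule.
X = np.random.rand(1000, 8)
y = (X.sum(axis=1) > 4).astype(int)

model = Sequential([
    Dense(16, activation="relu", input_shape=(8,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

loss, acc = model.evaluate(X, y, verbose=0)
print(f"accuracy: {acc:.2f}")
```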
The second part in this series addresses group-based imputation for dealing with missing data values. Check out why computing group means can be more involved than computing overall means, and see how to accomplish it in Python.
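For a quick taste of the idea, here is a hedged pandas sketch (the column names and values are hypothetical, not the series' dataset): each missing value is filled with the mean of its own group rather than the overall mean.

```python
# Group-based imputation: fill NaNs with the group mean instead of the overall mean.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "team":  ["A", "A", "A", "B", "B", "B"],
    "score": [10.0, np.nan, 14.0, 20.0, 22.0, np.nan],
})

# groupby + transform keeps the original index, so the result aligns row-for-row.
df["score"] = df.groupby("team")["score"].transform(lambda s: s.fillna(s.mean()))
print(df)
```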
In machine learning, going from a research environment to a production environment requires a well-designed architecture. This blog shows how to transfer a trained model to a prediction server.
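One common pattern for such a prediction server (illustrative only; the post may use a different stack, and the file name and payload layout here are assumptions) is to persist the trained model and wrap it in a small Flask endpoint:

```python
# Load a previously trained model and expose it behind an HTTP /predict endpoint.
import joblib
from flask import Flask, request, jsonify

app = Flask(__name__)
model = joblib.load("model.joblib")  # serialized in the research environment

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]      # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features).tolist()  # run inference on the request
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```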
Ensemble methods are meta-algorithms that combine several machine learning techniques into one predictive model in order to decrease variance (bagging), bias (boosting), or improve predictions (stacking).
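A brief scikit-learn sketch of those three flavors, using generic defaults rather than any particular article example:

```python
# Bagging, boosting, and stacking side by side on a built-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "bagging (reduces variance)": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    "boosting (reduces bias)":    AdaBoostClassifier(n_estimators=50),
    "stacking (combines models)": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier()),
                    ("lr", LogisticRegression(max_iter=5000))],
        final_estimator=LogisticRegression(max_iter=5000)),
}

for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```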
This collection of data science cheat sheets is not a cheat sheet dump, but a curated list of reference materials spanning a number of disciplines and tools.
Everyone is talking about TensorFlow these days. In this multi-part series, we explain TensorFlow in detail, including its architecture and industry applications.
Handling missing values is one of a data analyst's worst nightmares. In many situations, a wise analyst 'imputes' the missing values instead of dropping them from the data.
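A minimal illustration of why (the data here is hypothetical): dropping rows with missing values can discard most of the dataset, while a simple imputation keeps every row.

```python
# Dropping vs. imputing missing values in pandas.
import numpy as np
import pandas as pd

df = pd.DataFrame({"age":    [25, np.nan, 40, 35, np.nan],
                   "income": [50_000, 62_000, np.nan, 58_000, 45_000]})

dropped = df.dropna()                               # only 2 of 5 rows survive
imputed = df.fillna(df.median(numeric_only=True))   # fill each column with its median

print(len(dropped), "rows left after dropping")
print(imputed)
```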
Deep learning, data preparation, data visualization, oh my! Check out the latest installment of '5 Machine Learning Projects You Can No Longer Overlook' for insight on... well, what machine learning projects you can no longer overlook.
Data scientists may not be as educated or experienced in computer science, programming concepts, devops, site reliability engineering, non-functional requirements, software solution infrastructure, or general software architecture as compared to well-trained or experienced software architects and engineers.
It’s not necessary to understand the inner workings of a machine learning project, but you should understand whether the right things have been measured and whether the results are suited to the business problem. You need to know whether to believe what data scientists are telling you.
In this tutorial, we will build a neural network with Keras to determine whether or not tic-tac-toe games have been won by player X for given endgame board configurations. Introductory neural network concepts are covered.
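As a hedged sketch of the kind of model the tutorial builds (the board encoding, layer sizes, and placeholder data below are assumptions, not the tutorial's own code), each 9-cell endgame board maps to a binary "X won" label:

```python
# A small Keras classifier over tic-tac-toe endgame boards (1 = X, -1 = O, 0 = empty).
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Placeholder arrays standing in for the real endgame dataset.
X_boards = np.random.choice([-1, 0, 1], size=(500, 9))
y_x_won = np.random.randint(0, 2, size=(500,))

model = Sequential([
    Dense(18, activation="relu", input_shape=(9,)),
    Dense(9, activation="relu"),
    Dense(1, activation="sigmoid"),   # probability that X won the game
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_boards, y_x_won, epochs=10, batch_size=16, verbose=0)
```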
This is the first of 3 posts to cover imputing missing values in Python using Pandas. The slowest-moving of the series (out of necessity), this first installment lays out the task and data at the risk of boring you. The next 2 posts cover group- and regression-based imputation.
This is a basic overview of activation functions in neural networks, intended as a high-level introduction that can be read in a couple of minutes. It won't make you an expert, but it will give you a starting point toward genuine understanding.
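For reference, three common activation functions (sigmoid, tanh, ReLU) written out in NumPy so their behavior is easy to inspect:

```python
# Common activation functions implemented directly.
import numpy as np

def sigmoid(x):
    # squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # squashes into (-1, 1), zero-centered
    return np.tanh(x)

def relu(x):
    # passes positives through, zeroes out negatives
    return np.maximum(0.0, x)

x = np.linspace(-3, 3, 7)
print(sigmoid(x))
print(tanh(x))
print(relu(x))
```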
Like many other computer vision problems, there still isn’t an obvious or even “best” way to approach the problem of object recognition, meaning there’s still much room for improvement.
K-Nearest Neighbors (K-NN) is one of the simplest machine learning algorithms. When a new situation occurs, it scans through all past experiences and looks up the k closest experiences. Those experiences (or: data points) are what we call the k nearest neighbors.
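A bare-bones NumPy version of exactly that scan-and-vote procedure (toy data, illustrative only):

```python
# K-NN from scratch: measure the distance to every stored example,
# then take a majority vote among the k closest.
from collections import Counter
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    distances = np.linalg.norm(X_train - x_new, axis=1)  # distance to every past experience
    nearest = np.argsort(distances)[:k]                  # indices of the k nearest neighbors
    votes = Counter(y_train[nearest])                    # majority vote among their labels
    return votes.most_common(1)[0][0]

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.1, 0.9]), k=3))  # -> 0
```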
Such relational intelligence separates artificial intelligence systems from human cognition. DeepMind, the creators of AlphaGo, quietly published two groundbreaking research papers in this area, demonstrating a way to train relational reasoning with deep neural networks.
This is a summary (with links) of a three-part article series that's intended to be an in-depth overview of the considerations, tradeoffs, and recommendations associated with selecting between Python and R for programmatic data science tasks.
It seems Isaac Asimov didn't envision needing a law to govern robots in these sorts of life-and-death situations, where the question isn't the life of the robot versus the life of a human, but a choice between the lives of multiple humans!
Cross-validation helps improve your predictions using the K-Fold strategy. What is K-Fold, you ask? Check out this post for a visualized explanation.
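In the meantime, a short scikit-learn illustration of the idea (5 folds here; each fold takes a turn as the held-out validation set while the rest are used for training):

```python
# K-Fold cross-validation: train and score the model 5 times on rotating splits.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kfold)
print(scores)         # one accuracy per fold
print(scores.mean())  # the cross-validated estimate
```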
This post introduces Deep Learning Pipelines from Databricks, a new open-source library aimed at enabling everyone to easily integrate scalable deep learning into their workflows, from machine learning practitioners to business analysts.
Detecting faces and facial features like eyes, nose, and mouth, and even deriving emotions from their shapes, used to be a challenging task. It can now be "magically" solved with deep learning, and any talented teenager can do it in a few hours.
This is a collection of 277 data science key terms, explained with a no-nonsense, concise approach. Read on to find terminology related to Big Data, machine learning, natural language processing, descriptive statistics, and much more.