- Super Charge Python with Pandas on GPUs Using Saturn Cloud - May 12, 2021.
Saturn Cloud is a tool that gives you 10 hours of GPU computing and 3 hours of Dask cluster computing per month for free. In this tutorial, you will learn how to use these free resources to process data with Pandas on a GPU. The experiments show that running Pandas on a CPU can be over 1,000,000% slower than running the same workload on a Dask cluster of GPUs.
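The appeal of this approach is that the Pandas code barely changes: the GPU libraries mirror the Pandas API. A minimal sketch of the pattern, with the GPU swap shown only in comments since it requires Saturn Cloud's GPUs (the RAPIDS module names are standard, but the cluster setup is assumed):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "b", "a", "b"],
                   "val": [1.0, 2.0, 3.0, 4.0]})
out = df.groupby("key")["val"].mean()
print(out["a"])  # 2.0

# On a GPU the same groupby runs unchanged against a GPU DataFrame:
#   import cudf            # single-GPU DataFrame (RAPIDS)
#   import dask_cudf       # multi-GPU DataFrame on a Dask cluster
#   gdf = cudf.DataFrame.from_pandas(df)
#   gdf.groupby("key")["val"].mean()
```

The only code change for most workloads is the import and the DataFrame construction; the analysis logic stays the same.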
- Good-bye Big Data. Hello, Massive Data! - Oct 22, 2020.
Join the Massive Data Revolution with SQream. Shorten query times from days to hours or minutes, and speed up data preparation by analyzing the raw data directly.
- HOSTKEY GPU Grant Program - Aug 10, 2020.
The HOSTKEY GPU Grant Program is open to specialists and professionals in the Data Science sector conducting research or other projects centered on innovative uses of GPU processing that will yield practical results in the field of Data Science, with the objective of supporting basic scientific research and promising startups.
- PyTorch Multi-GPU Metrics Library and More in New PyTorch Lightning Release - Jul 2, 2020.
PyTorch Lightning, a very lightweight wrapper for PyTorch, recently released version 0.8.1, a major milestone. With incredible user adoption and growth, the team is continuing to build tools that make AI research easy.
- A Complete Guide to Google Colab for Deep Learning - Jun 16, 2020.
Google Colab is a widely popular cloud service for machine learning that features free access to GPU and TPU computing. Follow this detailed guide to help you get up and running fast to develop your next deep learning algorithms with Colab.
- Deep Learning Breakthrough: a sub-linear deep learning algorithm that does not need a GPU? - Mar 26, 2020.
Deep Learning sits at the forefront of many important advances underway in machine learning. With backpropagation being a primary training method, its computational inefficiencies require sophisticated hardware, such as GPUs. Learn about a recent algorithmic breakthrough that improves the backpropagation calculations so that training large neural networks on a CPU can outperform training on a GPU.
- GPU Accelerated Data Analytics & Machine Learning - Aug 2, 2019.
The future is here! Speed up your Machine Learning workflow using Python's RAPIDS libraries.
- Easily Deploy Deep Learning Models in Production - Aug 1, 2019.
Getting trained neural networks to be deployed in applications and services can pose challenges for infrastructure managers. Challenges like multiple frameworks, underutilized infrastructure and lack of standard implementations can even cause AI projects to fail. This blog explores how to navigate these challenges.
- Here’s how you can accelerate your Data Science on GPU - Jul 30, 2019.
Data Scientists need computing power. Whether you’re processing a big dataset with Pandas or running some computation on a massive matrix with Numpy, you’ll need a powerful machine to get the job done in a reasonable amount of time.
- Nvidia’s New Data Science Workstation — a Review and Benchmark - Jul 3, 2019.
Nvidia has recently released their Data Science Workstation, a PC that puts together all the Data Science hardware and software into one nice package. The workstation is a total powerhouse machine, packed with all the computing power — and software — that’s great for plowing through data.
- Examining the Transformer Architecture – Part 2: A Brief Description of How Transformers Work - Jul 2, 2019.
As the Transformer may become the new NLP standard, this review explores its architecture along with a comparison to existing RNN-based approaches.
- Mastering Fast Gradient Boosting on Google Colaboratory with free GPU - Mar 19, 2019.
CatBoost is a fast implementation of GBDT with GPU support out-of-the-box. Google Colaboratory is a very useful tool with free GPU support.
- Top KDnuggets tweets, Jan 30 – Feb 05: state-of-the-art in #AI, #MachineLearning - Feb 6, 2019.
Also: a brilliant tour-de-force, Reinforcement Learning to solve Rubik's Cube; Dask, Pandas, and GPUs: first steps; Neural network AI is simple, so stop pretending you are a genius.
- XGBoost on GPUs: Unlocking Machine Learning Performance and Productivity - Dec 7, 2018.
On Dec 18, 11:00 AM PT, join NVIDIA for a technical deep dive into GPU-accelerated machine learning, exploring the benefits of XGBoost on GPUs and much more.
- Best Deals in Deep Learning Cloud Providers: From CPU to GPU to TPU - Nov 15, 2018.
A detailed comparison of the best places to train your deep learning model for the lowest cost and hassle, including AWS, Google, Paperspace, vast.ai, and more.
- Key Takeaways from AI Conference SF, Day 2: AI and Security, Adversarial Examples, Innovation - Oct 30, 2018.
Highlights and key takeaways from selected keynote sessions on day 2 of AI Conference San Francisco 2018.
- Key Takeaways from AI Conference SF, Day 1: Domain Specific Architectures, Emerging China, AI Risks - Oct 29, 2018.
Highlights and key takeaways include Domain Specific Architectures – the next big thing, Emerging China – evolving from copying ideas to true innovation, and Addressing Risks in AI – Security, Privacy, and Ethics.
- Kinetica: Software Engineer (Python) [Arlington, VA] - Aug 21, 2018.
Work closely with the Product Owner to build out the product in Python and integrate all other parts (TensorFlow, Kubernetes, and our GPU-powered DB) via Python bindings, delivering an overall product (a REST API).
- Kinetica: Sr. Software Engineer (Machine Learning) [Arlington, VA] - Aug 21, 2018.
Join an accomplished team to help build out a new scalable, distributed machine learning and data science platform with tight integrations and pipelines to a distributed, sharded GPU-powered database.
- A Crash Course in MXNet Tensor Basics & Simple Automatic Differentiation - Aug 16, 2018.
This is an overview of some basic functionality of the MXNet ndarray package for creating tensor-like objects, and using the autograd package for performing automatic differentiation.
- PyTorch Tensor Basics - May 11, 2018.
This is an introduction to PyTorch's Tensor class, which is reasonably analogous to Numpy's ndarray, and which forms the basis for building neural networks in PyTorch.
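The ndarray analogy can be sketched in a few lines; the GPU step is shown only as a comment since it requires CUDA hardware:

```python
import numpy as np
import torch

a = np.arange(6.0).reshape(2, 3)
t = torch.from_numpy(a)            # Tensor sharing memory with the ndarray
u = torch.arange(6.0).reshape(2, 3)
s = t + u                          # elementwise add, same semantics as NumPy
print(s.sum().item())  # 30.0

# The step NumPy cannot take: move the Tensor to a GPU (requires CUDA).
#   s = s.to("cuda")
```

Converting back is just `s.numpy()`, which is what makes Tensors a natural substrate for building networks on top of existing NumPy-style code.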
- Comparing Deep Learning Frameworks: A Rosetta Stone Approach - Mar 26, 2018.
A Rosetta Stone of deep learning frameworks has been created to allow data scientists to easily transfer their expertise from one framework to another.
- Score a Nvidia Titan V GPU at AnacondaCON 2018 - Mar 21, 2018.
At AnacondaCON 2018 in Austin, Apr 8-11, you'll learn how data scientists are using GPUs for machine learning across a variety of applications and industries. The best part? One lucky attendee will receive a FREE NVIDIA TITAN V GPU!
- For GPU Databases of today, the big challenge is doing JOINS - Mar 2, 2018.
While some GPU database problems have been solved, one challenge remains that only one vendor has tackled properly and that is fast SQL joins on GPU.
- Fast.ai Lesson 1 on Google Colab (Free GPU) - Feb 8, 2018.
In this post, I will demonstrate how to use Google Colab for fastai. You can use GPU as a backend for free for 12 hours at a time. GPU compute for free? Are you kidding me?
- Top KDnuggets tweets, Jan 24-30: Top 10 Algorithms for Machine Learning Newbies; Want to Become a Data Scientist? Try Feynman Technique - Jan 31, 2018.
Also: Chronological List of AI Books To Read - from Goedel, Escher, Bach ... ; Aspiring Data Scientists! Start to learn Statistics with these 6 books.
- Supercharging Visualization with Apache Arrow - Jan 5, 2018.
Interactive visualization of large datasets on the web has traditionally been impractical. Apache Arrow provides a new way to exchange and visualize data at unprecedented speed and scale.
- NVIDIA DGX Systems – Deep Learning Software Whitepaper - Nov 20, 2017.
Download this whitepaper from NVIDIA DGX Systems to gain insight into the engineering expertise and innovation behind the pre-optimized deep learning frameworks available only on NVIDIA DGX Systems, and learn how to dramatically reduce your engineering costs using today’s most popular frameworks.
- 5 overriding factors for the successful implementation of AI - Oct 6, 2017.
Today AI is everywhere, from virtual assistants scheduling meetings, to facial recognition software and increasingly autonomous cars. We review 5 main factors for the successful AI implementation.
- GPU-accelerated, In-database Analytics for Operationalizing AI - Oct 2, 2017.
This blog explores how the massive parallel processing power of the GPU is able to unify the entire AI pipeline on a single platform, and how this is both necessary and sufficient for overcoming the challenges to operationalizing AI.
- Tensorflow Tutorial, Part 2 – Getting Started - Sep 28, 2017.
This tutorial will lay a solid foundation for your understanding of TensorFlow, the leading Deep Learning platform. The second part shows how to get started, install it, and build a small test case.
- KDnuggets™ News 17:n32, Aug 23: The Rise of GPU Databases; Instagramming with Python for Data Analysis - Aug 23, 2017.
Also: Deep Learning and Neural Networks Primer; A New Beginning to Deep Learning; The most important step in Machine Learning process.
- The Rise of GPU Databases - Aug 17, 2017.
The recent but noticeable shift from CPUs to GPUs is mainly due to the unique benefits GPUs bring to sectors like AdTech, finance, telco, retail, and security/IT. We examine where GPU databases shine.
- Deep Learning – Past, Present, and Future - May 2, 2017.
There is a lot of buzz around deep learning technology. First developed in the 1940s, deep learning was meant to simulate neural networks found in brains, but in the last decade 3 key developments have unleashed its potential.
- Top KDnuggets tweets, Mar 29 – Apr 04: Free Must-Read Books for #MachineLearning; #Apache Slug, new #BigData project - Apr 5, 2017.
Also Self-driving talent is fleeing Google and Uber to catch the autonomous-driving; Using Docker, CoreOS For #GPU Based #DeepLearning; A Short Guide to Navigating the Jupyter Ecosystem.
- Data Science Deployments With Docker - Dec 1, 2016.
With the recent release of NVIDIA’s nvidia-docker tool, accessing GPUs from within Docker is a breeze. In this tutorial we’ll walk you through setting up nvidia-docker so you too can deploy machine learning models with ease.
- Parallelism in Machine Learning: GPUs, CUDA, and Practical Applications - Nov 10, 2016.
The lack of parallel processing in machine learning tasks inhibits economy of performance, yet it may very well be worth the trouble. Read on for an introductory overview to GPU-based parallelism, the CUDA framework, and some thoughts on practical implementation.
- NVIDIA: Deep Learning Library Software Development Engineer - Oct 20, 2016.
Researchers and companies are using GPUs to power a revolution in deep learning, enabling breakthroughs in problems from image classification to speech recognition to natural language processing. Join the team building software that will be used by the entire world.
- Neural Designer: Predictive Analytics Software - Sep 26, 2016.
Neural Designer's advanced neural network algorithms, combined with a simple user interface and fast performance, make it a great tool for data scientists. Download the free 15-day trial version.
- Up to Speed on Deep Learning: July Update - Aug 29, 2016.
Check out this thorough roundup of deep learning stories that made news in July. See if there are any items of note you missed.
- How to Build Your Own Deep Learning Box - Jun 2, 2016.
Want to build an affordable deep learning box and get all the required software installed? Read on for a proper overview.
- KDnuggets™ News 16:n13, Apr 13: Signs of a Bad Data Scientist; Deep Learning from 30,000 Feet; Analytics Survey - Apr 13, 2016.
10 Signs Of A Bad Data Scientist; Deep Learning from 30,000 feet; IDC/KDnuggets Advanced Analytics Survey - please participate; A Pocket Guide to Data Science; Basics of GPU Computing for Data Scientists
- Basics of GPU Computing for Data Scientists - Apr 7, 2016.
With the rise of neural networks in data science, the demand for computationally powerful machines has led to GPUs. Learn how you can get started with GPUs and the algorithms that can leverage them.
- NVIDIA: Senior Data Mining Analyst - Aug 16, 2015.
Fill a key role in our Business Planning & Analytics Team, the analytic hub of NVIDIA's product marketing organization.
- Popular Deep Learning Tools – a review - Jun 18, 2015.
Deep Learning is the hottest trend now in AI and Machine Learning. We review the popular software for Deep Learning, including Caffe, Cuda-convnet, Deeplearning4j, Pylearn2, Theano, and Torch.
- Top /r/MachineLearning Posts, Mar 8-14: Word vectors, Hardware for Deep Learning, and Neural Graphics Engines - Mar 19, 2015.
Word vectors in NLP, Machine Learning's place in programming, hardware for deep learning, Machine Learning interviews, and neural graphics engines are all topics covered this week on /r/MachineLearning.
- Top /r/MachineLearning Posts, Mar 1-7: Stanford Deep Learning for NLP, Machine Learning with Scikit-learn - Mar 9, 2015.
This week on /r/MachineLearning, we have a new NLP-focused deep learning course from Stanford, an introduction to scikit-learn, visualization of music collections, an implementation of DeepMind, and NLP using deep learning and Torch.
- Top /r/MachineLearning Posts, Feb 22-28: Jurgen Schmidhuber AMA and Machine Learning Done Wrong - Mar 4, 2015.
The Jürgen Schmidhuber AMA begins taking questions, machine learning done wrong, GPUs for deep learning, Google opens its native MapReduce capabilities, and Google publishes its DeepMind paper this week on /r/MachineLearning.
- Facebook Open Sources deep-learning modules for Torch - Feb 9, 2015.
We review Facebook's recently released Torch modules for Deep Learning, which help researchers train large-scale convolutional neural networks for image recognition, natural language processing, and other AI applications.
- Associations and Text Mining of World Events - Sep 30, 2014.
Applying frequent itemset analysis to text may seem daunting, but parallel hardware and two insights open the door to theme extraction.
- CuDNN – A new library for Deep Learning - Sep 19, 2014.
Deep learning, which is becoming more and more popular, has proved useful in artificial intelligence. Last week, NVIDIA’s new library for deep neural networks, cuDNN, attracted much attention.