KDnuggets Home » News » 2017 » Mar » Tutorials, Overviews » Homebrewed Deep Learning and Do-It-Yourself Robotics ( 17:n10 )

Homebrewed Deep Learning and Do-It-Yourself Robotics



Learn how to experiment with embodied robotic cognition with IBM Project Intu, a platform that extends deep learning and other cognitive services to new devices with minimal coding.



Sponsored Post.
Robots are becoming the next big thing in practically every sphere of our lives. Before long, everybody will be crafting, training, and fitting them into every possible role in our society.

And I truly mean “everybody.” As this recent Medium post indicates, the notion of “social robots” is coming into the mainstream, in the form of handy companions that we craft much the same way that pets are bred, selected, trained, groomed, and accessorized to fit their owners’ personalities and lifestyles. Of course, their embedded artificial intelligence (AI) will need to complement their owners’ and companions’ native cognitive, emotional, and sensory faculties. Some of these pet projects (pun intended) may even result in robots suitable for consumer, business, and other mass-market applications.

IBM Project Intu

To learn how to experiment with embodied robotic cognition, check out this great IBM resource about Project Intu.

Robotics democratization will spawn a maker culture in which everybody has mastered the core skills of deep learning (DL). The clear historical parallel is with the “homebrew” tech culture in the 1970s that spawned the PC industry, minted many fresh fortunes, and fundamentally reshaped the technological fabric of modern life. This new homebrew robotics maker culture will bring together data scientists, hardware engineers, creative designers, and quirky aficionados of every stripe. Many innovations in homebrew robotics may come from self-taught “citizen data scientists” who’ve decided to branch out beyond data-driven software and into the world of tangible artifacts.

As homebrew robotics comes to the forefront, these inventions’ brains will come from DL technologies such as convolutional, recurrent, and other deep neural nets. However, this trend can’t achieve take-off velocity until makers have access to cheap, commoditized, open-source DL components for tinkering. The necessary components for this revolution range from the tangible (e.g., DL-optimized chipsets and motherboards, as well as the sensors, actuators, and other electromechanical devices from which robots are assembled) to the intangible (e.g., the software, data, and human capital for building, training, coding, and optimizing DL-based apps). As the cost of these DL ingredients approaches zero, it will become more affordable for everyone—not just VC-backed startups and deep-pocketed corporations—to build their own smart robots.

The good news is that this affordability inflection point is fast approaching. As the title of this recent O’Reilly article makes explicit, anyone can “build a super-fast DL machine for under $1,000.” In the article, author/entrepreneur Lukas Biewald provides a detailed breakout of the costs of the necessary DL components and the steps for putting them together and animating them with data-driven neural-net intelligence. What this means is that we’re on the verge of a robotics revolution in which anyone with the basic skills, passion, and wherewithal can invent their own personal C-3PO wannabe or, at the very least, a DL-driven, bot-infused Internet of Things (IoT) endpoint.

Per his article, here’s what a do-it-yourselfer needs to build a robot’s DL brains:

  • Monitor, mouse, and keyboard for programming, configuring, training, monitoring, and troubleshooting the robot’s DL components
  • Motherboard with a PCIe slot for the GPU, two DDR4 slots for RAM, and a WiFi antenna for communicating with and controlling the robot
  • Internal 250 GB 2.5-inch SATA III solid-state drive for storing data persistently on the robot
  • Fast GPU for running the algorithmic smarts of the robot
  • Enough RAM on the GPU to fit the DL model and whatever training and operational data you need to run on the robot
  • Power supply to supply the juice that keeps the robot animated
  • Heat sink to keep the robot from frying its little DL brains out
  • Operating system, such as the latest version of Ubuntu, for running whatever DL software you’re configuring into your robot
  • Data-science notebooks for building, training, and deploying the DL models you intend to run on your robot
  • Training data—labeled or otherwise—for optimizing the DL smarts of your robot
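On the “enough RAM on the GPU” item above, you can roughly estimate whether a model will fit on a given card before buying hardware. The sketch below is illustrative only: the layer sizes, the 4-bytes-per-parameter figure (32-bit floats), and the 3× overhead factor for activations, gradients, and optimizer state are assumptions for this example, not figures from Biewald’s article.

```python
# Rough check of whether a model's parameters fit in GPU memory.

def param_memory_bytes(layer_params, bytes_per_param=4):
    """Total memory for model parameters, assuming 32-bit floats."""
    return sum(layer_params) * bytes_per_param

def fits_in_gpu(layer_params, gpu_ram_gb, overhead_factor=3.0):
    """Leave headroom for activations, gradients, and optimizer state."""
    needed = param_memory_bytes(layer_params) * overhead_factor
    return needed <= gpu_ram_gb * 1024**3

# A hypothetical small convnet: two conv layers plus fully connected layers.
layers = [3 * 64 * 3 * 3,      # first conv: 3 -> 64 channels, 3x3 kernels
          64 * 128 * 3 * 3,    # second conv: 64 -> 128 channels
          128 * 7 * 7 * 4096,  # fully connected layer
          4096 * 1000]         # classifier head

print(fits_in_gpu(layers, gpu_ram_gb=8))  # roughly 30M params on an 8 GB card
```

A back-of-the-envelope check like this won’t replace profiling, but it flags obvious mismatches (a ten-billion-parameter model on a consumer card) before you spend money.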

As the homebrew robotics revolution picks up speed, inventors will depend on their component providers to package more of these capabilities in pre-built subsystems that plug into various robot form factors. In that regard, another inflection point is on its way, judging by the fact that Google is adding AI to Raspberry Pi, while Samsung, Intel, and NXP are also moving toward integrating DL into their respective commodity hardware platforms. As discussed here, Google will provide APIs so that DL services can be invoked and tweaked in whatever gadgets, such as robots, incorporate its Android technology.

As we move closer to 2020, I predict that more data-science hackathons and contests will revolve around embodied cognition in robotic devices. It’s not only one of the coolest and most entertaining sets of technologies to build and demonstrate; it also has the potential to change our world more fundamentally than almost anything else a data scientist—established or newbie—is ever likely to build.

For more information on how you too can experiment with embodied robotic cognition, check out this great IBM resource about Project Intu.

This is a platform that extends DL and other cognitive services to new devices, bringing reasoning and learning into the physical world with minimal coding. It ties together multiple services with different external sensors, actuators, and services; it uses pre-trained capabilities, or “behaviors,” that work across various devices and operating systems to enrich and create an immersive experience; and it connects disparate trained knowledge sources while integrating third-party services for a robust experience. Under Project Intu, IBM provides a gateway that lets you plug your system into Watson and start playing immediately, with no complex coding necessary. Project Intu helps you deliver the experiences your customers and businesses are asking for, whether through an avatar, an IoT device, or your own homebrew robot.
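To make the “plug your system into Watson” pattern concrete, here is a hypothetical sketch of how a homebrew robot might package sensor data for a hosted cognitive service over HTTP. The endpoint URL, header layout, and `build_recognition_request` helper are placeholders for illustration; they are not the actual Project Intu or Watson API, whose details you should take from IBM’s documentation.

```python
# Hypothetical sketch: packaging microphone audio for a hosted
# speech-to-text service. Endpoint and credentials are placeholders.
import urllib.request

def build_recognition_request(audio_bytes, api_key,
                              endpoint="https://example.com/v1/recognize"):
    """Build (but do not send) an HTTP POST carrying raw audio."""
    return urllib.request.Request(
        endpoint,
        data=audio_bytes,
        headers={"Content-Type": "audio/wav",
                 "Authorization": "Bearer " + api_key},
        method="POST")

# The robot would send this with urllib.request.urlopen(req) and parse
# the JSON transcript that comes back.
req = build_recognition_request(b"\x00" * 16, api_key="demo-key")
print(req.full_url, req.get_method())
```

The point of the pattern is that the robot’s onboard code stays thin: heavy cognition runs in the cloud, and the device only gathers sensor data and acts on the responses.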