- New Computing Paradigm for AI: Processing-in-Memory (PIM) Architecture - Oct 15, 2021.
As ever-larger deep neural networks are trained on the latest and fastest chip technologies, an important challenge continues to bottleneck performance -- and it is not compute power. No matter how quickly a DNN's operations can be computed, the data still has to move, and moving data across the chip is expensive. New solutions must be developed to advance capabilities.
AI, Hardware, In-Memory Computing, Samsung
- High-Performance Deep Learning: How to train smaller, faster, and better models – Part 5 - Jul 16, 2021.
Even the best software tools for training efficient deep learning models depend on an infrastructure of robust, performant compute. Here, we review the current software and hardware ecosystems you might consider for your development when the highest possible performance is needed.
Deep Learning, Efficiency, Google, Hardware, Machine Learning, NVIDIA, PyTorch, Scalability, TensorFlow
- AI Industry Innovation: Making the Invisible Visible - Mar 12, 2021.
AI Accelerator Festival: Hardware Acceleration for AI at the Edge. The world's only end-user-led event dedicated to accelerating industries by harnessing the power of AI. March 16-19, 2021.
AI, Hardware, Industry, IoT