News

In this video from the 2018 Swiss HPC Conference, Torsten Hoefler of ETH Zürich presents: Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis. “Deep Neural Networks ...
Breakthrough in 'distributed deep learning': MACH slashes the time and resources needed to train computers for product searches. Date: December 9, 2019 ...
In this video, Huihuo Zheng from Argonne National Laboratory presents: Data Parallel Deep Learning. The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides an intensive two weeks of ...
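Data-parallel training of the kind Zheng's talk covers replicates the model on every worker and averages gradients across workers each step. A minimal sketch, assuming PyTorch and its DistributedDataParallel wrapper (the talk itself is not tied to any one framework); launch with e.g. `torchrun --nproc_per_node=4 train.py`:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; torchrun supplies the rendezvous environment variables.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)  # single-node assumption: rank == local GPU index

    model = torch.nn.Linear(128, 10).cuda()
    model = DDP(model, device_ids=[rank])  # all-reduces gradients across workers
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):
        # In real training, each rank would read a different shard of the dataset.
        x = torch.randn(32, 128, device="cuda")
        y = torch.randint(0, 10, (32,), device="cuda")
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()  # DDP overlaps the gradient all-reduce with backprop
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```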
Each year, the Association for Computing Machinery honors a computer scientist for his or her contributions to the field. The prize, which comes with $250,000 thanks to Google and Intel, is named ...
On Oct. 16-17, some 60 Princeton graduate students and postdocs — along with a handful of undergraduates — explored the most widely used deep learning techniques for computer vision tasks and delved ...
NVIDIA’s CUDA is a general-purpose parallel computing platform and programming model that accelerates deep learning and other compute-intensive apps by taking advantage of the parallel ...
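The CUDA model maps a computation onto a grid of lightweight GPU threads, each handling one element. A minimal sketch of that thread-per-element pattern, written in Python via Numba's CUDA bindings rather than NVIDIA's native C/C++ API:

```python
import numpy as np
from numba import cuda

@cuda.jit
def vec_add(a, b, out):
    i = cuda.grid(1)      # global thread index across the whole grid
    if i < out.size:      # guard threads that fall past the array end
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vec_add[blocks, threads_per_block](a, b, out)  # Numba copies the arrays to/from the GPU
np.testing.assert_allclose(out, a + b)
```

The same grid-of-threads idea is what lets GPUs apply one operation to millions of tensor elements at once, which is why deep learning workloads map onto CUDA so well.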
ADELPHI, Md. -- A new algorithm is enabling deep learning that is more collaborative and communication-efficient than traditional methods. Army researchers developed algorithms that facilitate ...
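The article does not detail the Army algorithm itself, but one common way to make distributed learning communication-efficient is gradient sparsification: each worker transmits only its largest gradient entries instead of the full dense gradient. A generic NumPy sketch of top-k sparsification, offered purely as an illustration of the technique, not the researchers' method:

```python
import numpy as np

def top_k_sparsify(grad: np.ndarray, k: int):
    """Return indices and values of the k largest-magnitude gradient entries."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(idx, vals, shape):
    """Rebuild a dense gradient from the sparse message."""
    out = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    out[idx] = vals
    return out.reshape(shape)

grad = np.random.randn(1024, 1024).astype(np.float32)
idx, vals = top_k_sparsify(grad, k=10_000)      # send ~1% of entries
approx = densify(idx, vals, grad.shape)
# The message is ~100x smaller; in practice the residual (grad - approx)
# is accumulated locally and folded into the next step's gradient.
```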
Scaling AI Isn't A Computing Problem... Dedicated hardware, like GPUs (graphics processing units) and TPUs (tensor processing units), has become essential for training AI models.
He has made deep and wide-ranging contributions to many areas of parallel computing, including programming languages, compilers, and runtime systems for multicore, manycore, and distributed computers.