News

In this video from the 2018 Swiss HPC Conference, Torsten Hoefler of ETH Zürich presents: Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis. “Deep Neural Networks ...
Breakthrough in 'distributed deep learning': MACH slashes time and resources needed to train computers for product searches. December 9, 2019 ...
In this video, Huihuo Zheng from Argonne National Laboratory presents: Data Parallel Deep Learning. The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides two intensive weeks of ...
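A minimal sketch of the data-parallel pattern the talk addresses: each rank computes gradients over its own shard of the training data, then all ranks average them with MPI_Allreduce before a synchronous weight update. The compute_gradients helper and NPARAMS constant are hypothetical placeholders for illustration, not taken from the ATPESC material.

```c
/* Data-parallel gradient averaging, sketched with standard MPI calls.
 * compute_gradients() stands in for a backward pass on this rank's shard. */
#include <mpi.h>
#include <stdio.h>

#define NPARAMS 4   /* hypothetical model size, for illustration only */

static void compute_gradients(int rank, float *grad) {
    for (int i = 0; i < NPARAMS; i++)
        grad[i] = (float)(rank + 1);   /* dummy per-rank gradient */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float local[NPARAMS], global[NPARAMS];
    compute_gradients(rank, local);

    /* Sum gradients across all ranks, then divide by the rank count
     * to get the average used for the synchronous SGD step. */
    MPI_Allreduce(local, global, NPARAMS, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    for (int i = 0; i < NPARAMS; i++)
        global[i] /= (float)size;

    if (rank == 0)
        printf("averaged gradient[0] = %f\n", global[0]);

    MPI_Finalize();
    return 0;
}
```

Summing with MPI_SUM and dividing locally is the usual choice: the reduction composes across any number of ranks, and the per-element division is cheap relative to the communication.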
Each year, the Association for Computing Machinery honors a computer scientist for his or her contributions to the field. The prize, which carries $250,000 in funding from Google and Intel, is named ...
On Oct. 16-17, some 60 Princeton graduate students and postdocs — along with a handful of undergraduates — explored the most widely used deep learning techniques for computer vision tasks and delved ...
MPI (Message Passing Interface) is the de facto standard communications framework for scientific and commercial parallel and distributed computing. The Intel MPI implementation is a core ...
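To show the message-passing model itself, here is a minimal point-to-point example using only standard MPI calls; it should compile with any implementation's compiler wrapper (mpicc, or Intel MPI's mpiicc), though exact build and launch commands vary by installation.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        int token = 42;
        /* Rank 0 sends one integer to rank 1 (requires at least 2 ranks). */
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int token;
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 of %d received %d from rank 0\n", size, token);
    }

    MPI_Finalize();
    return 0;
}
```

Run it with at least two ranks, for example mpirun -n 2 ./token.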
NVIDIA’s CUDA is a general-purpose parallel computing platform and programming model that accelerates deep learning and other compute-intensive applications by taking advantage of the parallel ...
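The programming model is easiest to see in a small kernel. Below is a standard SAXPY sketch against the public CUDA runtime API; the array size and launch configuration are arbitrary illustrative choices.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

/* Each thread handles one element; blocks of threads run in parallel
 * across the GPU's streaming multiprocessors. */
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    /* Unified memory keeps the sketch short: the buffers are visible
     * to both host and device. */
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Each of the roughly one million elements is handled by its own GPU thread, which is the data-parallel mapping CUDA is built around.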
He has made deep and wide-ranging contributions to many areas of parallel computing, including programming languages, compilers, and runtime systems for multicore, manycore, and distributed computers.