Professor: Richard Graham (NVIDIA-Mellanox)
Schedule: Friday, 22/01, from 12:30 to 14:30
Objective: The ever-increasing demand for higher computational performance drives the creation of new data center accelerators and processing units. Previously, CPUs and GPUs were the main sources of compute power. The exponential growth in data volume and problem complexity drove the creation of a new processing unit: the I/O processing unit, or IPU. IPUs are interconnect elements that include In-Network Computing engines, which can participate in the application at run time and analyze application data as it is being transferred within the data center or at the edge. The combination of CPUs, GPUs, and IPUs creates the next generation of data center and edge computing architectures. The first generations of IPUs are already in use in leading HPC and deep learning data centers, have been integrated into multiple MPI frameworks, NVIDIA NCCL, Charm++, and others, and have demonstrated performance acceleration of nearly 10X. The session will review the state of the art in IPUs, how to use In-Network Computing technology to accelerate HPC and AI applications and algorithms, and the roadmap for future In-Network Computing technologies.
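The collective offload that In-Network Computing engines perform for MPI and NCCL can be pictured as a reduction tree in which switches aggregate partial results on the way up, so a full allreduce finishes in one traversal instead of shipping every host's data to a single node. The sketch below is a plain-Python model of that idea under assumed names (`switch_reduce`, `in_network_allreduce`, `fanout`); it is an illustration of the concept, not the actual SHARP or NCCL API.

```python
# Illustrative model of an in-network allreduce: each "switch" sums the
# vectors arriving from its children, and the final result is multicast
# back to all hosts. Names and structure are assumptions for illustration.

def switch_reduce(children):
    """Aggregate the vectors arriving at one switch (element-wise sum)."""
    return [sum(vals) for vals in zip(*children)]

def in_network_allreduce(host_vectors, fanout=2):
    """Reduce host vectors up a tree of switches, then broadcast the total."""
    level = host_vectors
    while len(level) > 1:
        # Each switch at this level aggregates up to `fanout` children,
        # so data is combined in the network rather than at a root host.
        level = [switch_reduce(level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
    total = level[0]
    # The aggregated result is multicast back to every host.
    return [total[:] for _ in host_vectors]

hosts = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(in_network_allreduce(hosts)[0])  # every host receives [16, 20]
```

Because the switches do the summation, each host sends and receives its vector only once, which is the source of the latency and bandwidth savings the session describes.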