Accelerating SVMs by integrating GPUs into MapReduce Clusters
Sergio Herrero, Massachusetts Institute of Technology, USA
October 28, 2011, 10:00-10:45, CAB H 52, ETH Zurich
The uninterrupted growth of information repositories has progressively led data-intensive applications, such as MapReduce-based systems, into the mainstream. The MapReduce paradigm has repeatedly proven to be a simple yet flexible and scalable technique for distributing algorithms across thousands of nodes and petabytes of information. Under these circumstances, classic data mining algorithms have been adapted to this model in order to run in production environments. Unfortunately, the high-latency nature of this architecture has relegated the applicability of these algorithms to batch-processing scenarios. In spite of this shortcoming, the emergence of massively threaded shared-memory multiprocessors, such as Graphics Processing Units (GPUs), on the commodity computing market has enabled these algorithms to be executed orders of magnitude faster while keeping the same MapReduce-based model.

In this work, the integration of massively threaded shared-memory multiprocessors into MapReduce-based clusters is proposed, creating a unified heterogeneous architecture that enables executing Map and Reduce operators on thousands of threads across multiple GPU devices and nodes, while maintaining the built-in reliability of the baseline system. For this purpose, a programming model that facilitates the collaboration of multiple CPU cores and multiple GPU devices was created for the resolution of data-intensive problems. To demonstrate the potential of this hybrid system, a popular NP-hard supervised learning algorithm, the Support Vector Machine (SVM), is implemented, showing that a 36x-192x speedup can be achieved on large datasets without changing the model or leaving the commodity hardware paradigm.
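To make the Map/Reduce decomposition concrete, the following is a minimal sketch (not the speaker's actual code, which runs on GPUs) of how an SVM decision-function evaluation can be split across data partitions: each mapper computes a partial sum of kernel contributions over its partition, and the reducer aggregates the partials. Function names, the RBF kernel choice, and the `gamma` parameter are illustrative assumptions.

```python
from functools import reduce
import numpy as np

def map_partial(partition, alpha_part, y_part, x_query, gamma=0.5):
    """Mapper (illustrative): partial sum of alpha_i * y_i * K(x_i, x_query)
    over one data partition, using an RBF kernel."""
    sq_dists = np.sum((partition - x_query) ** 2, axis=1)
    k = np.exp(-gamma * sq_dists)
    return np.sum(alpha_part * y_part * k)

def svm_decision(partitions, x_query, b=0.0, gamma=0.5):
    """Reducer (illustrative): sum the per-partition partials and add the bias.
    `partitions` is a list of (X, alpha, y) tuples, one per worker."""
    partials = [map_partial(X, a, y, x_query, gamma) for (X, a, y) in partitions]
    return reduce(lambda s, p: s + p, partials, 0.0) + b
```

In the architecture described in the talk, each mapper would instead dispatch its partition to thousands of GPU threads, but the partition-then-aggregate structure is the same.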