Exploiting the Parallelism of Large-Scale Application-Layer Networks by Adaptive GPU-Based Simulation
Authors: P. Andelfinger, H. Hartenstein
Source: Proceedings of the 2014 Winter Simulation Conference (WSC'14), Savannah, Georgia, USA, December 2014
We present a GPU-based simulator engine that performs all steps of large-scale network simulations on a commodity many-core GPU, reducing overhead by avoiding unnecessary data transfers between graphics memory and main memory. Using the example of a widely deployed peer-to-peer network, we analyze the parallelism available in large-scale application-layer networks, which suggests that thousands of processor cores can be employed concurrently for simulation. The proposed simulator exploits the identified parallelism on the vast number of parallel cores in modern GPUs, enabling substantial simulation speedup. At runtime, the simulator adapts its configuration to balance parallelism against overheads, achieving high performance for a given network model and scenario. A performance evaluation of simulated networks comprising up to one million peers demonstrates a speedup factor of up to 19.5 compared with an efficient sequential implementation and shows the effectiveness of the runtime adaptation to varying network conditions.
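The abstract gives no implementation details; purely as an illustration of the central idea (keeping all simulation state and event processing in device memory, with one thread per simulated peer, so no per-step transfers between graphics memory and main memory are needed), here is a minimal CUDA sketch. All names and parameters here (Peer, process_events, the fixed event delay, the time-stepped loop) are assumptions for illustration and not the paper's actual engine, which is event-driven and adapts its configuration at runtime.

```cuda
// Minimal sketch, not the authors' implementation: one CUDA thread per
// simulated peer processes that peer's pending event, and all simulation
// state remains in device memory for the entire run.
#include <cuda_runtime.h>

struct Peer {
    float next_event_time;  // timestamp of this peer's pending event
    int   pending_msgs;     // hypothetical application-layer state
};

// Process, in parallel, all events due at or before the current time.
__global__ void process_events(Peer* peers, int n, float now) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (peers[i].next_event_time <= now) {
        // Hypothetical model logic: consume one message, schedule a follow-up.
        if (peers[i].pending_msgs > 0) peers[i].pending_msgs--;
        peers[i].next_event_time = now + 0.05f;  // assumed fixed delay
    }
}

int main() {
    const int n = 1 << 20;  // on the order of one million peers, as in the evaluation
    Peer* d_peers;
    cudaMalloc(&d_peers, n * sizeof(Peer));
    cudaMemset(d_peers, 0, n * sizeof(Peer));  // zero-initialized toy state

    // Advance simulated time in fixed steps; the paper's engine is instead
    // event-driven and tunes its configuration at runtime.
    for (float now = 0.0f; now < 1.0f; now += 0.05f)
        process_events<<<(n + 255) / 256, 256>>>(d_peers, n, now);

    cudaDeviceSynchronize();
    cudaFree(d_peers);
    return 0;
}
```

The point of the sketch is the data-movement pattern: peer state is allocated once on the device and only kernel launches cross the host boundary, which is the overhead-avoidance strategy the abstract describes.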