Herramientas para el Desarrollo de Programas Paralelos
Keywords:
OpenMP, MPI, CUDA, Parallel Computing.
Abstract
Several tools are available today for developing parallel programs, each specialized to fully exploit the characteristics of a particular computational architecture. This article presents three of the most popular: OpenMP for programming on shared-memory architectures, MPI for distributed-memory architectures, and CUDA for programming on graphics cards. The main characteristics of each are described, along with a programming example.
References
[1] Rohit Chandra, Leonardo Dagum, Dave Kohr, Dror Maydan, Jeff McDonald, and Ramesh Menon. 2001. Parallel Programming in OpenMP. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
[2] William Gropp, Ewing Lusk, and Anthony Skjellum. 2014. Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press.
[3] Shane Cook. 2012. CUDA Programming: A Developer’s Guide to Parallel Computing with GPUs (1st. ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
Published
2023-02-27
How to Cite
Román Alonso, G., Quiróz Fabián, J. L., Castro García, M. A., & Aguilar Cornejo, M. (2023). Herramientas para el Desarrollo de Programas Paralelos. Contactos, Revista De Educación En Ciencias E Ingeniería, (127), 28-36. Retrieved from https://contactos.izt.uam.mx/index.php/contactos/article/view/259
Section
Articles