The Parallel Processing Group pursues a variety of research topics in the general field of parallel and distributed systems. Some of its activities are described below.
Parallel Programming
It is widely argued that the biggest challenges of the current multicore and accelerator era lie not in the hardware itself but in its software. Programming modern parallel systems is one of the hottest and most exciting topics, pursued by the academic community and the computer industry alike. The Parallel Processing Group has a long history of research in parallel programming models and tools for both shared-memory and distributed-memory systems.
OpenMP is nowadays the de facto standard for programming shared-memory multiprocessor systems. With the additions in version 4.0 of the specification, it aims to conquer the accelerator sector as well. Our group has developed OMPi, a lightweight and portable source-to-source OpenMP compiler. OMPi takes a C program with OpenMP #pragma directives and transforms it into equivalent multithreaded C code based on POSIX threads (or other threading libraries). It produces code with highly competitive performance and includes a sophisticated, easily extensible runtime system that supports tasking and unlimited multilevel parallelism.
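As a rough illustration, the snippet below shows the kind of input OMPi handles; it is a generic OpenMP example, not code from the OMPi distribution. The directive is rewritten by the compiler into multithreaded C code that calls the runtime system.

```c
/* A minimal OpenMP C program of the kind OMPi compiles.  The
 * directive below is transformed into multithreaded C code. */
#include <stdio.h>

int main(void)
{
    int a[100];

    /* The loop iterations are divided among the team's threads. */
    #pragma omp parallel for
    for (int i = 0; i < 100; i++)
        a[i] = i * i;

    printf("a[99] = %d\n", a[99]);   /* prints 9801 */
    return 0;
}
```

The same source is accepted unchanged by any OpenMP-capable compiler (e.g., gcc with -fopenmp).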
Some of the results of the group’s research on parallel programming have been applied to specific applications that must leverage parallelism in order to achieve high performance.
P2P Systems
Peer-to-peer (p2p) networks consist of a large population of networked computers that offer resources and operate in a fully decentralized manner. Such systems are usually quite large and highly dynamic, and each node (computer) has information about only a small subset of the other participating nodes. Among other things, they have served as a popular file-sharing infrastructure. Our group works on two of the key challenges present in such systems:
- Locating a desired resource (search or resource discovery).
- Creating multiple copies of resources so that peers discover them faster (replication).
We have studied both problems analytically and experimentally, and we have contributed new search and replication strategies in the context of unstructured p2p systems.
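For a flavor of the search problem, here is a minimal sketch of one classic strategy in unstructured overlays, a TTL-bounded random walk. The toy topology, walker parameters, and function names are all illustrative assumptions; this is not code from the group's publications.

```c
/* A TTL-bounded random walk: a common search strategy in
 * unstructured p2p overlays.  Topology and parameters are toys. */
#include <stdio.h>
#include <stdlib.h>

#define NODES 8
#define TTL   4                      /* hops before giving up */

/* Toy overlay: per-node neighbor lists, -1 terminated. */
static const int neighbors[NODES][4] = {
    {1, 2, -1, -1}, {0, 3, 4, -1}, {0, 5, -1, -1}, {1, 6, -1, -1},
    {1, 7, -1, -1}, {2, -1, -1, -1}, {3, -1, -1, -1}, {4, -1, -1, -1}
};

static int degree(int v)
{
    int d = 0;
    while (d < 4 && neighbors[v][d] != -1) d++;
    return d;
}

/* Walk randomly from `start` looking for `target`; return the node
 * where the resource was found, or -1 if the TTL expired first. */
static int random_walk(int start, int target)
{
    int v = start;
    for (int ttl = TTL; ttl > 0; ttl--) {
        if (v == target)
            return v;
        v = neighbors[v][rand() % degree(v)];
    }
    return (v == target) ? v : -1;
}

int main(void)
{
    srand(42);
    printf("found at node: %d\n", random_walk(0, 7));
    return 0;
}
```

Replication interacts directly with such walks: placing extra copies of a resource on more nodes shortens the expected number of hops before a walker encounters one.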
Collective Communications
Distributed-memory parallel systems are built around an interconnection network, which carries messages between the system's processors/cores or nodes. In contrast to standard point-to-point (pairwise) communications, collective communications involve many, possibly all, nodes of the system: they refer to common communication patterns in which many parties must collectively exchange data. Typical scenarios that appear quite frequently in practice are broadcasting (where a data item must be transmitted from a single node to all the other nodes), scattering / gathering (where a single node must send / receive a different data item to / from each of the other nodes), multinode broadcasting, total exchange, multicasting, etc. Collective communications have been a major feature of the Message Passing Interface (MPI) since its inception and one of the reasons for its enormous popularity.
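To make two of these patterns concrete, the following minimal MPI sketch broadcasts one value from a root node and then scatters a different item to each node; the buffer contents and root choice are illustrative.

```c
/* Broadcasting and scattering expressed with MPI collectives. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, param, mine;
    int *chunks = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Broadcasting: rank 0 sends the same value to all nodes. */
    param = (rank == 0) ? 42 : 0;
    MPI_Bcast(&param, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Scattering: rank 0 sends a different item to each node. */
    if (rank == 0) {
        chunks = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            chunks[i] = i * i;
    }
    MPI_Scatter(chunks, 1, MPI_INT, &mine, 1, MPI_INT,
                0, MPI_COMM_WORLD);

    printf("rank %d: param=%d, chunk=%d\n", rank, param, mine);

    free(chunks);
    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and launched with mpirun, each rank prints the broadcast value 42 together with its own scattered item.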
The Parallel Processing Group has worked on all major types of collective communications, over a variety of interconnection networks. It has contributed both analytical results and optimal or near-optimal algorithms for these communication problems.