So far, we have seen a number of synchronization primitives (e.g., mutex locks, semaphores, and monitors). These primitives all rely on the fact that threads have shared variables. For example, a semaphore used by a number of threads must be shared by all of them; in other words, it must be a global variable visible to these threads. However, this is impossible if threads or processes run across a network, because we have no efficient way of sharing variables over a network. One way to overcome this problem is message passing.
With message passing, threads share channels instead. A channel is a bi-directional communication link through which one thread sends messages to, and receives messages from, another thread. More precisely, before any communication can take place, a channel must be established between the two threads. A thread sends a message to a channel with the send primitive; the data then flows from the sender into the channel, and the receiver retrieves it from the channel with the receive primitive. In this way, channels are the only objects shared by threads. Consequently, no variables are accessed concurrently by multiple threads, and no mutual exclusion is required. Because threads or processes that communicate by message passing share no variables, they do not have to run on processors that share a common memory. In particular, they can run on processors that share no memory at all, such as computer systems on a network. Because of this fact, programs that employ message passing are usually referred to as distributed programs.
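To make the calling pattern concrete, the sketch below shows one possible shape of such a send/receive interface in C++. The names (`Channel`, `send`, `receive`, `SenderThread`, `ReceiverThread`) are hypothetical and are not ThreadMentor's channel classes; the point is only that the channel object is the sole thing the two threads share.

```cpp
#include <string>

// Hypothetical channel interface: the only object the two threads share.
template <typename T>
class Channel {
public:
    virtual ~Channel() = default;
    virtual void send(const T& message) = 0;  // sender side: deposit a message into the channel
    virtual T receive() = 0;                  // receiver side: retrieve a message from the channel
};

// Calling pattern: each thread is given a reference to the shared channel
// and communicates only through it -- no other shared variables are needed.
void SenderThread(Channel<std::string>& ch)   { ch.send("hello"); }
void ReceiverThread(Channel<std::string>& ch) { std::string msg = ch.receive(); (void)msg; }
```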
The capacity of a channel is its buffer size. If a channel has no buffer, we say its capacity is zero. With a capacity of zero, messages cannot wait in the channel. As a result, a sender must wait until a receiver retrieves the message, and the two threads synchronize for the message transfer to occur. More precisely, if the sender sends a message before the receiver is ready to retrieve it, the sender waits until the receiver retrieves the message. Similarly, if the receiver reaches its receive before the sender has sent a message, the receiver waits until a message becomes available. In this way, communication and synchronization are tightly coupled. Channels with zero capacity are therefore usually referred to as synchronous channels (i.e., blocking send and blocking receive), and the synchronization is a rendezvous. A telephone line is a good example of synchronous communication, because the connection must be established before any conversation can take place.
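As a rough illustration of how a zero-capacity channel behaves, the following sketch implements a rendezvous between one sender and one receiver using a mutex and a condition variable from the C++ standard library (not ThreadMentor's primitives); the class name `SyncChannel` is made up. Note that `send()` does not return until `receive()` has actually taken the message.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

// Zero-capacity (synchronous) channel for one sender and one receiver:
// every transfer is a rendezvous.
template <typename T>
class SyncChannel {
public:
    void send(const T& message) {
        std::unique_lock<std::mutex> lock(m_);
        slot_ = message;
        full_ = true;
        cv_.notify_all();                            // wake a waiting receiver
        cv_.wait(lock, [this] { return !full_; });   // block until the receiver has taken it
    }
    T receive() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return full_; });    // block until a message has been deposited
        T message = slot_;
        full_ = false;
        cv_.notify_all();                            // release the waiting sender
        return message;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    T slot_{};
    bool full_ = false;
};

int main() {
    SyncChannel<int> ch;
    std::thread receiver([&] { std::cout << "received " << ch.receive() << "\n"; });
    ch.send(42);        // does not return until the receiver has retrieved 42
    receiver.join();
}
```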
If the capacity is non-zero, messages may wait in the buffer before the receiver retrieves them. In this case, if the buffer is full, a sender waits until the buffer is no longer full, and if the buffer is empty, a receiver waits until the buffer contains a message. This type of channel is referred to as an asynchronous channel. A telephone with an answering machine is a good example of asynchronous communication. As long as the recording tape has space, a caller (i.e., the sender) can call and leave a message. On the other hand, the receiver can retrieve a message from the recording tape, or wait until a new message arrives.
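A bounded channel can be sketched in the same style, assuming a fixed capacity supplied to the constructor: `send()` waits only while the buffer is full, and `receive()` waits only while it is empty. Again, `BoundedChannel` is an illustrative name, not part of ThreadMentor.

```cpp
#include <condition_variable>
#include <cstddef>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Bounded (buffered) channel: messages queue up until the receiver takes them.
template <typename T>
class BoundedChannel {
public:
    explicit BoundedChannel(std::size_t capacity) : capacity_(capacity) {}

    void send(const T& message) {
        std::unique_lock<std::mutex> lock(m_);
        not_full_.wait(lock, [this] { return buffer_.size() < capacity_; });  // wait only if full
        buffer_.push(message);
        not_empty_.notify_one();                    // a message is now available
    }
    T receive() {
        std::unique_lock<std::mutex> lock(m_);
        not_empty_.wait(lock, [this] { return !buffer_.empty(); });           // wait only if empty
        T message = buffer_.front();
        buffer_.pop();
        not_full_.notify_one();                     // a buffer slot is now free
        return message;
    }
private:
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
    std::queue<T> buffer_;
    const std::size_t capacity_;
};

int main() {
    BoundedChannel<int> tape(3);                    // "answering machine" with room for three messages
    std::thread caller([&] {
        for (int i = 0; i < 5; ++i) tape.send(i);   // blocks once the tape is full
    });
    for (int i = 0; i < 5; ++i) std::cout << tape.receive() << "\n";
    caller.join();
}
```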
The capacity of an asynchronous channel may be infinite, in which case the sender never waits. In ThreadMentor, as long as memory is available, a sender can always send; conceptually, therefore, ThreadMentor supports asynchronous channels with infinite capacity. Between capacity zero (i.e., synchronous channels) and infinite capacity (i.e., asynchronous channels), message passing over channels of bounded, non-zero capacity is usually referred to as buffered message passing.
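Dropping the capacity check from the bounded sketch above gives a channel with conceptually unbounded capacity, where `send()` never waits (it is limited only by available memory). As before, `UnboundedChannel` is a hypothetical name for illustration, not ThreadMentor's own implementation.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// "Infinite-capacity" channel: send() never blocks; receive() waits only
// when no message is buffered.
template <typename T>
class UnboundedChannel {
public:
    void send(const T& message) {
        {
            std::lock_guard<std::mutex> lock(m_);
            buffer_.push(message);                  // always room, so the sender never waits
        }
        not_empty_.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lock(m_);
        not_empty_.wait(lock, [this] { return !buffer_.empty(); });
        T message = buffer_.front();
        buffer_.pop();
        return message;
    }
private:
    std::mutex m_;
    std::condition_variable not_empty_;
    std::queue<T> buffer_;
};
```

It can be exercised exactly like the bounded version; the only behavioral difference is that a sender is never blocked.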