Experimental html version of downloadable textbook, see http://www.tacc.utexas.edu/~eijkhout/istc/istc.html

22 OpenMP topic: Synchronization

In the constructs for declaring parallel regions above, you had little control over the order in which threads executed their assigned work. This chapter discusses synchronization constructs: ways of telling threads to bring a certain order to the sequence in which they do things.

• critical : a section of code that can be executed by only one thread at a time; see section 22.2.1.
• atomic : atomic update of a single memory location. Only certain specified syntax patterns are supported. This was added in order to be able to use hardware support for atomic updates.
• barrier : section 22.1.
• ordered : section 18.6.
• locks : section 22.3.
• flush : section 25.3.
• nowait : section 18.7.

22.1 Barrier


A barrier defines a point in the code where all active threads will stop until all threads have arrived at that point. With this, you can guarantee that certain calculations are finished. For instance, in this code snippet, computation of y can not proceed until another thread has computed its value of x .

#pragma omp parallel
{
  int mytid = omp_get_thread_num();
  x[mytid] = some_calculation();
  y[mytid] = x[mytid]+x[mytid+1];
}


This can be guaranteed with a barrier pragma:

#pragma omp parallel
{
  int mytid = omp_get_thread_num();
  x[mytid] = some_calculation();
  #pragma omp barrier
  y[mytid] = x[mytid]+x[mytid+1];
}


Apart from the barrier directive, which inserts an explicit barrier, OpenMP has implicit barriers after a worksharing construct. Thus the following code is well defined:

#pragma omp parallel
{
  #pragma omp for
  for (int mytid=0; mytid<number_of_threads; mytid++)
    x[mytid] = some_calculation();
  #pragma omp for
  for (int mytid=0; mytid<number_of_threads-1; mytid++)
    y[mytid] = x[mytid]+x[mytid+1];
}


You can also put each parallel loop in a parallel region of its own, but there is some overhead associated with creating and deleting the team of threads in between the regions.

22.1.1 Implicit barriers


At the end of a parallel region the team of threads is dissolved and only the master thread continues. Therefore, there is an implicit barrier at the end of a parallel region .

There is also barrier behaviour associated with omp for loops and other worksharing constructs (see section 19.3). For instance, there is an implicit barrier at the end of the loop. This barrier behaviour can be cancelled with the nowait clause.

You will often see the idiom

#pragma omp parallel
{
  #pragma omp for nowait
  for (i=0; i<N; i++)
    a[i] = // some expression
  #pragma omp for
  for (i=0; i<N; i++)
    b[i] = ...... a[i] ......
}


Here the nowait clause implies that threads can start on the second loop while other threads are still working on the first. Since the two loops use the same schedule here, an iteration that uses a[i] can indeed rely on that value having been computed.

22.2 Mutual exclusion


Sometimes it is necessary to let only one thread execute a piece of code. Such a piece of code is called a critical section , and OpenMP has several mechanisms for realizing this.

The most common use of critical sections is to update a variable. Since updating involves reading the old value and writing back the new, this opens the possibility of a race condition : another thread reads the current value before the first thread can write back its update; the second thread then updates from a stale value.

Critical sections are an easy way to turn an existing code into a correct parallel code. However, there are disadvantages to this, and sometimes a more drastic rewrite is called for.

22.2.1 critical and atomic


There are two pragmas for critical sections: critical and atomic . Both denote atomic operations in a technical sense. The first one is general and can contain an arbitrary sequence of instructions; the second one is more limited but has performance advantages.

The typical application of a critical section is to update a variable:

#pragma omp parallel
{
  int mytid = omp_get_thread_num();
  double tmp = some_function(mytid);
  #pragma omp critical
  sum += tmp;
}


Exercise

Consider a loop where each iteration updates a variable.

#pragma omp parallel for shared(result)
for (int i=0; i<N; i++) {
  result += some_function_of(i);
}


Discuss qualitatively the difference between:

• turning the update statement into a critical section, versus
• letting the threads accumulate into a private variable tmp as above, and summing these after the loop.

Do an Amdahl-style quantitative analysis of the first case, assuming that you do $n$ iterations on $p$ threads, and each iteration has a critical section that takes a fraction $f$ of its time. Assume the number of iterations $n$ is a multiple of the number of threads $p$. Also assume the default static distribution of loop iterations over the threads.

A critical section works by acquiring a lock, which carries a substantial overhead. Furthermore, if your code has multiple critical sections, they are all mutually exclusive: if a thread is in one critical section, the other ones are all blocked.

On the other hand, the syntax for atomic sections is limited to the update of a single memory location, but such sections are not exclusive and they can be more efficient, since they assume that there is a hardware mechanism for making them critical.

The problem with critical sections being mutually exclusive can be mitigated by naming them:

#pragma omp critical (optional_name_in_parens)


22.3 Locks


OpenMP also has the traditional mechanism of a lock . A lock is somewhat similar to a critical section: it guarantees that some instructions can only be performed by one thread at a time. However, a critical section is about code; a lock is about data. With a lock you make sure that some data elements can only be touched by one thread at a time.

One simple example of the use of locks is generation of a histogram . A histogram consists of a number of bins that get updated depending on some data. Here is the basic structure of such a code:

int count[100];
float x = some_function();
int ix = (int)x;
if (ix>=100)
  error();
else
  count[ix]++;


It would be possible to guard the last line:

#pragma omp critical
count[ix]++;


but that is unnecessarily restrictive. If there are enough bins in the histogram, and if some_function takes enough time, conflicting writes are unlikely. The solution then is to create an array of locks, with one lock for each count location.

Create/destroy:

void omp_init_lock(omp_lock_t *lock);
void omp_destroy_lock(omp_lock_t *lock);


Set and release:

void omp_set_lock(omp_lock_t *lock);
void omp_unset_lock(omp_lock_t *lock);


Since the set call is blocking, there is also a non-blocking variant:

int omp_test_lock(omp_lock_t *lock);

This attempts to set the lock, and returns nonzero if it succeeded.


Unsetting a lock needs to be done by the thread that set it.

Lock operations implicitly perform a flush .

Exercise

In the following code, one thread sets array a and then uses it to update b; the other thread sets array b and then uses it to update a. Argue that this code can deadlock. How could you fix this?

#pragma omp parallel shared(a, b, nthreads, locka, lockb)
{
  #pragma omp sections nowait
  {
    #pragma omp section
    {
      omp_set_lock(&locka);
      for (i=0; i<N; i++)
        a[i] = ..

      omp_set_lock(&lockb);
      for (i=0; i<N; i++)
        b[i] = .. a[i] ..
      omp_unset_lock(&lockb);
      omp_unset_lock(&locka);
    }

    #pragma omp section
    {
      omp_set_lock(&lockb);
      for (i=0; i<N; i++)
        b[i] = ...

      omp_set_lock(&locka);
      for (i=0; i<N; i++)
        a[i] = .. b[i] ..
      omp_unset_lock(&locka);
      omp_unset_lock(&lockb);
    }
  }  /* end of sections */
}  /* end of parallel region */


22.3.1 Nested locks


A lock as explained above cannot be locked again if it is already locked. A nested lock can be locked multiple times by the same thread before being unlocked. The corresponding routines are:

• omp_init_nest_lock
• omp_destroy_nest_lock
• omp_set_nest_lock
• omp_unset_nest_lock
• omp_test_nest_lock


22.4 Example: Fibonacci computation


The Fibonacci sequence is recursively defined as $F(0)=1,\qquad F(1)=1,\qquad F(n)=F(n-1)+F(n-2)\quad\hbox{for } n\geq 2.$ We start by sketching the basic single-threaded solution. The naive code looks like:

int *value;
int fib(int n);

int main() {
  value = new int[nmax+1];
  value[0] = 1;
  value[1] = 1;
  fib(10);
}

int fib(int n) {
  int i, j;
  if (n>=2) {
    i=fib(n-1); j=fib(n-2);
    value[n] = i+j;
  }
  return value[n];
}


However, this is inefficient, since most intermediate values will be computed more than once. We solve this by keeping track of which results are known:

...
done = new int[nmax+1];
for (i=0; i<=nmax; i++)
  done[i] = 0;
done[0] = 1;
done[1] = 1;
...
int fib(int n) {
  int i, j;
  if (!done[n]) {
    i = fib(n-1); j = fib(n-2);
    value[n] = i+j; done[n] = 1;
  }
  return value[n];
}


The OpenMP parallel solution calls for two different ideas. First of all, we parallelize the recursion by using tasks:

int fib(int n) {
  int i, j;
  if (n>=2) {
    #pragma omp task shared(i) firstprivate(n)
    i=fib(n-1);
    #pragma omp task shared(j) firstprivate(n)
    j=fib(n-2);
    #pragma omp taskwait
    value[n] = i+j;
  }
  return value[n];
}


This computes the right solution, but, as in the naive single-threaded solution, it recomputes many of the intermediate values.

A naive addition of the done array leads to data races, and probably an incorrect solution:

int fib(int n) {
  int i, j;
  if (!done[n]) {
    #pragma omp task shared(i) firstprivate(n)
    i=fib(n-1);
    #pragma omp task shared(j) firstprivate(n)
    j=fib(n-2);
    #pragma omp taskwait
    value[n] = i+j;
    done[n] = 1;
  }
  return value[n];
}


For instance, there is no guarantee that the done array is updated later than the value array, so a thread can think that done[n-1] is true, but value[n-1] does not have the right value yet.

One solution to this problem is to use a lock, and make sure that, for a given index  n , the values done[n] and value[n] are never touched by more than one thread at a time:

int fib(int n)
{
  int i, j;
  omp_set_lock( &(dolock[n]) );
  if (!done[n]) {
    #pragma omp task shared(i) firstprivate(n)
    i = fib(n-1);
    #pragma omp task shared(j) firstprivate(n)
    j = fib(n-2);
    #pragma omp taskwait
    value[n] = i+j;
    done[n] = 1;
  }
  omp_unset_lock( &(dolock[n]) );
  return value[n];
}


This solution is correct, optimally efficient in the sense that it does not recompute anything, and it uses tasks to obtain a parallel execution.

However, this solution is efficient only up to a constant factor: a lock is still set even if a value is already computed and will therefore only be read. This can be solved with a more complicated use of critical sections, but we will forego this.
