Hi,

I'm studying for an exam coming up soon, and I'm trying to figure out the functional difference between these two approaches:

#pragma omp parallel for
for(unsigned int i = 0; i < G.Out.bucket_count(); i++){
	for(auto itv = G.Out.begin(i); itv != G.Out.end(i); itv++){

		unsigned int v = itv->first;

		#pragma omp parallel for
		for(unsigned int j = 0; j < G.Out[v]._map.bucket_count(); j++){
			for(auto ite = G.Out[v]._map.begin(j); ite != G.Out[v]._map.end(j); ite++){

				unsigned int w = ite->first;

				if(o[v] > o[w]){
					#pragma omp critical
					p[o[v]] = min(p[o[v]],o[w]);
				}

			}
		}

	}
}

and

	#pragma omp parallel for
	for(unsigned int i = 0; i < G.Out.bucket_count(); i++){
		for(auto itv = G.Out.begin(i); itv != G.Out.end(i); itv++){

			unsigned int v = itv->first;

			if(p[v] == v){

				unsigned int parent = v; //=p[v]

				#pragma omp parallel for reduction(min : parent)
				for(unsigned int j = 0; j < G.Out[v]._map.bucket_count(); j++){
					for(auto ite = G.Out[v]._map.begin(j); ite != G.Out[v]._map.end(j); ite++){
						unsigned int w = ite->first;
						if(v > w)
							parent = min(parent,w);
					}
				}

				p[v] = parent;
				if(p[v] != v) changes = 1;

			}

		}
	}

In the first example, the minimum of the two values is computed inside a critical section, while the second one uses a reduction. Both work, and they seem equivalent to me. When would one choose one over the other?

Thanks, and sorry if the question is too niche. Any other info about OpenMP is greatly appreciated :D

top 2 comments
[–] [email protected] 3 points 1 year ago

The critical section makes sure that only a single thread can execute the section at a time. So when a thread wants to execute the section, it first needs to make sure no other thread is currently executing it, and it may have to wait for other threads to finish the section before it can enter.
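
To picture what that means in code: an unnamed critical section behaves roughly like one global lock that every thread has to grab before entering. Here is a minimal sketch of that idea using OpenMP's explicit lock API; the shared_min function and its vector input are made up for illustration, not taken from your code:

#include <omp.h>
#include <algorithm>
#include <climits>
#include <vector>

// Illustrative only: fold a vector of values into one shared minimum, protecting
// the shared update with an explicit lock instead of an unnamed critical section.
unsigned int shared_min(const std::vector<unsigned int>& values) {
	unsigned int result = UINT_MAX;

	omp_lock_t lock;
	omp_init_lock(&lock);

	#pragma omp parallel for
	for (long i = 0; i < (long)values.size(); i++) {
		omp_set_lock(&lock);                   // wait until no other thread holds the lock
		result = std::min(result, values[i]);  // only one thread at a time runs this update
		omp_unset_lock(&lock);                 // let the next waiting thread in
	}

	omp_destroy_lock(&lock);
	return result;
}

Every thread ends up funnelling through that one lock, which is exactly the serialization (and potential waiting) described above.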

Reductions, however, don't incur this synchronization overhead. Instead, each thread works on its own independent copy of parent, and after the loop is done the reduction is applied to merge all the per-thread values. The following is essentially what the #pragma omp parallel for reduction(min : parent) is equivalent to:

// one private copy of parent per thread (8 threads assumed here), each starting at v
unsigned int parents[8] = {v, v, v, v, v, v, v, v};

#pragma omp parallel for num_threads(8)
for(unsigned int j = 0; j < G.Out[v]._map.bucket_count(); j++) {
	for(auto ite = G.Out[v]._map.begin(j); ite != G.Out[v]._map.end(j); ite++) {
		unsigned int w = ite->first;
		// each thread only updates its own slot, so no synchronization is needed
		if(v > w)
			parents[omp_get_thread_num()] = min(parents[omp_get_thread_num()],w);
	}
}

// after the parallel loop, merge the per-thread results into one value
unsigned int parent = v;
for (unsigned int i = 0; i < 8; ++i) {
	parent = min(parent, parents[i]);
}
[–] ShadowAether 1 points 1 year ago

A critical section protects a shared variable, so in any case where multiple threads write to the same variable you should have a critical section; it's very general. I don't have much experience with reduction, but it seems geared towards loops where every iteration performs the same operation as part of a larger computation, takes roughly the same amount of time to complete, and is expected to start and end together. Something like parallelizing an integral by splitting it into ranges would be simpler with reduction (see the sketch below). Also, if the threads need to both read and write the global value mid-loop, it seems like that would need a critical section.
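
For a concrete picture of that integral case, here is a minimal sketch; the integrate function name, f(x) = x*x, and the midpoint rule are just assumptions for illustration:

// Illustrative only: approximate the integral of f(x) = x*x over [0, 1] with the
// midpoint rule; a reduction combines the per-thread partial sums at the end.
double integrate(unsigned int n) {
	const double h = 1.0 / n;
	double sum = 0.0;

	// every iteration does the same kind of work, so a reduction fits naturally
	#pragma omp parallel for reduction(+ : sum)
	for (unsigned int i = 0; i < n; i++) {
		double x = (i + 0.5) * h;   // midpoint of the i-th subinterval
		sum += x * x;               // f(x) = x*x
	}

	return sum * h;                 // approaches 1/3 as n grows
}

No thread ever touches another thread's partial sum, so there is nothing to protect with a critical section; the merge happens once when the loop ends.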