I implemented a parallel merge sort algorithm from Cormen's well-known text. I wrote it in C using pthreads and compiled with MinGW on Win7 x64 (later also tested with GCC on Ubuntu, with the same results). My first attempt at parallelization was naive: I spawned a new thread at every level of recursion (which is actually what Cormen's pseudocode implies). However, this usually either takes forever or crashes with a segfault (I assume there is some hard limit on how many threads the system can handle). This seems to be a common beginner mistake with recursive parallelization; in fact I found a similar DISCUSSION on this site. So I instead used the suggestion from that thread: set a threshold on the problem size, and if the function that would spawn a new thread is given a set smaller than the threshold (say 10,000 elements), it operates on the elements directly rather than creating a new thread for such a small set.
Now everything seems to work fine. I have listed some of my results below. N is the problem size (a thoroughly scrambled set of the integers [1, 2, 3, ..., N]) and threshold is the value below which my parallel sort and parallel merge functions refuse to spawn new threads. The first table shows the sort time in milliseconds; the second shows how many sort/merge worker threads were spawned in each case. Looking at the N=1E6 and N=1E7 rows in the bottom table, you can see that any time I lower the threshold enough to allow more than ~8000 merge workers, I get a segfault. Again, I assume that is due to some limit the system places on threads, and I would be happy to hear more about it, but it is not my main question.
The main question is why the last row segfaults when using a fairly high threshold, which would have spawned an expected 15/33 worker threads (following the pattern of the previous rows). Surely that is not too many threads for my system to handle. The one instance that did complete (the lower-right cell of the table) used about 1.2GB of RAM (my system has 6GB), and the threaded versions never seem to use more RAM than the ones with 0 threads to the right of each row.
It seems clearly related to the problem size and system resources. I am hoping there is some piece of common knowledge I am missing that makes the answer obvious.
Any ideas? Sorry for the long wall of text... if you have read this far, thank you! I can post the source if it seems relevant.
EDIT: Source added for reference:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <pthread.h>
#include <time.h> /* for time(), used to seed srand() */
const int N = 100000000;
const int SORT_THRESHOLD = 10000000;
const int MERGE_THRESHOLD = 10000000;
int sort_thread_count = 0;
int merge_thread_count = 0;
typedef struct s_pmergesort_args {
int *vals_in, p, r, *vals_out, s;
} pmergesort_args;
typedef struct s_pmerge_args {
int *temp, p1, r1, p2, r2, *vals_out, p3;
} pmerge_args;
void *p_merge_sort(void *v_pmsa);
void *p_merge(void *v_pma);
int binary_search(int val, int *temp, int p, int r);
int main() {
int *values, i, rand1, rand2, temp, *sorted;
long long rand1a, rand1b, rand2a, rand2b;
struct timeval start, end;
/* allocate values on heap and initialize */
values = malloc(N * sizeof(int));
sorted = malloc(N * sizeof(int));
for (i = 0; i < N; i++) {
values[i] = i + 1;
sorted[i] = 0;
}
/* scramble
* - complicated logic to maximize swapping
* - lots of testing (not shown) was done to verify optimal swapping */
srand(time(NULL));
for (i = 0; i < N/10; i++) {
rand1a = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
rand1b = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
rand1 = (int)((rand1a * rand1b + rand()) % N);
rand2a = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
rand2b = (long long)(N*((double)rand()/(1+(double)RAND_MAX)));
rand2 = (int)((rand2a * rand2b + rand()) % N);
temp = values[rand1];
values[rand1] = values[rand2];
values[rand2] = temp;
}
/* set up args for p_merge_sort */
pmergesort_args pmsa;
pmsa.vals_in = values;
pmsa.p = 0;
pmsa.r = N-1;
pmsa.vals_out = sorted;
pmsa.s = 0;
/* sort */
gettimeofday(&start, NULL);
p_merge_sort(&pmsa);
gettimeofday(&end, NULL);
/* verify sorting */
for (i = 1; i < N; i++) {
if (sorted[i] < sorted[i-1]) {
fprintf(stderr, "Error: array is not sorted.\n");
exit(1);
}
}
printf("Success: array is sorted.\n");
printf("Sorting took %dms.\n", (int)(((end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec * 1000000 + start.tv_usec))/1000));
free(values);
free(sorted);
printf("( sort threads created: %d )\n", sort_thread_count);
printf("( merge threads created: %d )\n", merge_thread_count);
return 0;
}
void *p_merge_sort(void *v_pmsa) {
pmergesort_args pmsa = *((pmergesort_args *) v_pmsa);
int *vals_in = pmsa.vals_in;
int p = pmsa.p;
int r = pmsa.r;
int *vals_out = pmsa.vals_out;
int s = pmsa.s;
int n = r - p + 1;
pthread_t worker;
if (n > SORT_THRESHOLD) {
sort_thread_count++;
}
if (n == 1) {
vals_out[s] = vals_in[p];
} else {
int *temp = malloc(n * sizeof(int));
int q = (p + r) / 2;
int q_ = q - p + 1;
pmergesort_args pmsa_l;
pmsa_l.vals_in = vals_in;
pmsa_l.p = p;
pmsa_l.r = q;
pmsa_l.vals_out = temp;
pmsa_l.s = 0;
pmergesort_args pmsa_r;
pmsa_r.vals_in = vals_in;
pmsa_r.p = q+1;
pmsa_r.r = r;
pmsa_r.vals_out = temp;
pmsa_r.s = q_;
if (n > SORT_THRESHOLD) {
pthread_create(&worker, NULL, p_merge_sort, &pmsa_l);
} else {
p_merge_sort(&pmsa_l);
}
p_merge_sort(&pmsa_r);
if (n > SORT_THRESHOLD) {
pthread_join(worker, NULL);
}
pmerge_args pma;
pma.temp = temp;
pma.p1 = 0;
pma.r1 = q_ - 1;
pma.p2 = q_;
pma.r2 = n - 1;
pma.vals_out = vals_out;
pma.p3 = s;
p_merge(&pma);
free(temp);
}
    return NULL; /* function has void * return type */
}
void *p_merge(void *v_pma) {
pmerge_args pma = *((pmerge_args *) v_pma);
int *temp = pma.temp;
int p1 = pma.p1;
int r1 = pma.r1;
int p2 = pma.p2;
int r2 = pma.r2;
int *vals_out = pma.vals_out;
int p3 = pma.p3;
int n1 = r1 - p1 + 1;
int n2 = r2 - p2 + 1;
int q1, q2, q3, t;
pthread_t worker;
if (n1 < n2) {
t = p1; p1 = p2; p2 = t;
t = r1; r1 = r2; r2 = t;
t = n1; n1 = n2; n2 = t;
}
if (n1 > MERGE_THRESHOLD) {
merge_thread_count++;
}
if (n1 == 0) {
return NULL; /* function has void * return type */
} else {
q1 = (p1 + r1) / 2;
q2 = binary_search(temp[q1], temp, p2, r2);
q3 = p3 + (q1 - p1) + (q2 - p2);
vals_out[q3] = temp[q1];
pmerge_args pma_l;
pma_l.temp = temp;
pma_l.p1 = p1;
pma_l.r1 = q1-1;
pma_l.p2 = p2;
pma_l.r2 = q2-1;
pma_l.vals_out = vals_out;
pma_l.p3 = p3;
if (n1 > MERGE_THRESHOLD) {
pthread_create(&worker, NULL, p_merge, &pma_l);
} else {
p_merge(&pma_l);
}
pmerge_args pma_r;
pma_r.temp = temp;
pma_r.p1 = q1+1;
pma_r.r1 = r1;
pma_r.p2 = q2;
pma_r.r2 = r2;
pma_r.vals_out = vals_out;
pma_r.p3 = q3+1;
p_merge(&pma_r);
if (n1 > MERGE_THRESHOLD) {
pthread_join(worker, NULL);
}
}
    return NULL; /* function has void * return type */
}
int binary_search(int val, int *temp, int p, int r) {
int low = p;
int mid;
int high = (p > r+1)? p : r+1;
while (low < high) {
mid = (low + high) / 2;
if (val <= temp[mid]) {
high = mid;
} else {
low = mid + 1;
}
}
return high;
}
EDIT 2: Added a new image below showing the "max" and "total" RAM used by each version (max meaning the highest simultaneous allocation/usage, total meaning the sum of all allocation requests over the life of the program). These suggest that at N=1E8 and threshold=1E7 I should see a max usage of 3.2GB (which my system should be able to support). But again... I suspect it is related to some other limitation in the pthread library... not my actual system resources.
Best Answer
It looks like it is running out of memory. In your example, if the code is run sequentially, the most memory it has allocated at one time is 1.6GB. When using threads, it uses more than 3GB. I put some wrappers around the malloc/free functions and got this result:
Allocation of 12500000 bytes failed with 3074995884 bytes already allocated.
It is easy to see that the threaded version uses more memory. In that case it is sorting the left and right halves of the overall array at the same time, and allocating two large temp buffers to do it. When run sequentially, the temp buffer for the left half is freed before the right half gets sorted.
Here are the wrappers I used:
/* requires <stdio.h>, <stdlib.h>, <assert.h>, <pthread.h> */
static size_t total_allocated = 0;
static size_t max_allocated = 0;
static pthread_mutex_t total_allocated_mutex = PTHREAD_MUTEX_INITIALIZER;
static void *allocate(int n)
{
void *result = 0;
pthread_mutex_lock(&total_allocated_mutex);
result = malloc(n);
if (!result) {
fprintf(stderr,"Allocation of %d bytes failed with %zu bytes already allocated\n",n,total_allocated);
}
assert(result);
total_allocated += n;
if (total_allocated>max_allocated) {
max_allocated = total_allocated;
}
pthread_mutex_unlock(&total_allocated_mutex);
return result;
}
static void deallocate(void *p,int n)
{
pthread_mutex_lock(&total_allocated_mutex);
total_allocated -= n;
free(p);
pthread_mutex_unlock(&total_allocated_mutex);
}
We found a similar question about "c - Unexplained segfault with Pthreads" on Stack Overflow: https://stackoverflow.com/questions/11380556/