Problem Description
I have implemented the following code to perform filter2D on a matrix. After I compiled the program, it returns a Segmentation fault error. In this program I want to assign the input array inside the program (I don't want to load images), then apply a kernel, also assigned inside the program, so I can measure the runtime of the filter2D function for different matrix and kernel sizes.
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>

using namespace std;
using namespace cv;

int main(){
    //Picture size for input and output are the same
    int Pxsize = 128;
    int Pysize = Pxsize;
    //Kernel size
    int Kxsize = 3;
    int Kysize = Kxsize;
    //filter arguments
    Point anchor;
    double delta;
    int ddepth;
    //name for output
    char window_name[32] = "filter2D Demo";
    Mat input[128][128];
    Mat output[128][128];
    Mat kernel[3][3];
    // Initialize arguments for the filter
    anchor = Point( -1, -1 );
    delta = 0;
    ddepth = -1;
    int i,j;
    //assign data between 0 and 255 to the input matrix
    for (i=0; i<Pxsize; i++)
        for (j=0; j<Pysize; j++)
            input[i][j]=(i*j)%255;
    //assign data to the kernel
    for (i=0; i<Kxsize; i++)
        for (j=0; j<Kysize; j++)
            kernel[i][j]=1;
    //the problem is here:
    filter2D((InputArray) input, (OutputArray) output, ddepth, (InputArray) kernel, anchor, delta, BORDER_DEFAULT );
    namedWindow( window_name, CV_WINDOW_AUTOSIZE );
    imshow( window_name, (OutputArray) output );
    return 0;
}
The output is:
Segmentation fault
------------------
(program exited with code: 139)
Press return to continue
I used this command for the gcc 6.2.0 compiler on Linux Mint:

g++ -Wall -Wextra -Werror -pedantic -I/usr/local/include/opencv -I/usr/local/include/opencv2 -L/usr/local/lib/ -g -o "opencv" "opencv.cpp" -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_stitching
Thanks in advance.
Recommended Answer
According to the comments, the problem has been found! It is caused by casting the Mat objects to InputArray and OutputArray in the filter2D call. I changed the implementation to the following program and it works.
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <time.h>

//Picture size for input and output are the same
#define MAX1 4096
#define Pxsize MAX1
#define Pysize Pxsize
//Kernel size
#define Kxsize 9
#define Kysize Kxsize
//number of iterations
#define NUM_LOOP 10000

using namespace std;
using namespace cv;

//CV_8UC1 matrices hold 8-bit elements, so back them with unsigned char
unsigned char A[Pxsize][Pysize];
unsigned char B[Pxsize][Pysize];
unsigned char K[Kxsize][Kysize];

// measuring the time
double pTime = 0, mTime = 10; // pTime: accumulated program time; mTime: maximum time I want to wait

int main(){
    //single core
    struct timespec tStart, tEnd; //used to record the processing time
    double tTotal, tBest = 10000; //the minimum total time is kept as the best time
    int w = 0;
    //filter arguments
    Point anchor;
    double delta;
    int ddepth;
    //assign data between 0 and 255 to the input matrix
    int i,j;
    for (i=0; i<Pxsize; i++)
        for (j=0; j<Pysize; j++)
            A[i][j] = (i+j)%255;
    //assign data to the kernel
    for (i=0; i<Kxsize; i++)
        for (j=0; j<Kysize; j++)
            K[i][j] = 1;
    //wrap the plain arrays in Mat headers instead of casting them
    cv::Mat input  = cv::Mat(Pxsize, Pysize, CV_8UC1, A);
    cv::Mat output = cv::Mat(Pxsize, Pysize, CV_8UC1, B);
    cv::Mat kernel = cv::Mat(Kxsize, Kysize, CV_8UC1, K);
    // Initialize arguments for the filter
    anchor = Point( -1, -1 );
    delta = 0;
    ddepth = -1;
    do{ // this loop repeats the body to record the best time
        clock_gettime(CLOCK_MONOTONIC, &tStart);
        filter2D( input, output, ddepth, kernel, anchor, delta, BORDER_REPLICATE );
        clock_gettime(CLOCK_MONOTONIC, &tEnd);
        tTotal = (tEnd.tv_sec - tStart.tv_sec);
        tTotal += (tEnd.tv_nsec - tStart.tv_nsec) / 1000000000.0;
        if (tTotal < tBest)
            tBest = tTotal;
        pTime += tTotal;
    } while (w++ < NUM_LOOP && pTime < mTime);
    //cout<<" "<< input<<endl;
    //cout<<" "<< output<<endl;
    printf("\nThe best time: %f sec in %d repetitions for a %dx%d matrix and a %dx%d kernel\n", tBest, w, Pxsize, Pysize, Kxsize, Kysize);
    return 0;
}