locked
Method's type signature is not PInvoke compatible (C#)

  • Question

  • Hi, I am calling a C++ function from a C# project through a DLL; the function returns an IplImage.

    I have used this to export the function:

    extern "C" __declspec(dllexport) IplImage * showNewImage(IplImage * input){...}

    and this in C#

    [DllImport("dllFile")]
    public static extern MIplImage showNewImage(MIplImage input);

    Is the way I have used DllImport wrong? The function is supposed to return an image. The C++ function and the DLL both work properly, but I need help calling the function from my C# project through the DLL.

    Please Help.
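    For reference, the usual P/Invoke pattern when a native function returns a pointer is to declare the return value (and pointer parameters) as IntPtr rather than as a struct by value; returning a non-blittable struct by value is what triggers the "not PInvoke compatible" error. A hedged sketch, assuming the "dllFile" export above and Emgu CV's MIplImage mirror (not a confirmed fix, just the standard pattern):

```csharp
// Sketch only: a native IplImage* crosses P/Invoke as an opaque IntPtr,
// not as a struct returned by value. Cdecl matches a plain extern "C" export.
[DllImport("dllFile", CallingConvention = CallingConvention.Cdecl)]
static extern IntPtr showNewImage(IntPtr input);

// Usage (hypothetical): pass the native pointer of an existing image and,
// if the header fields are needed, copy them into the managed mirror.
IntPtr result = showNewImage(inputPtr);
MIplImage header = (MIplImage)Marshal.PtrToStructure(result, typeof(MIplImage));
```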

    Tuesday, October 16, 2012 2:57 AM

Answers

  • I'm using OpenCV 2.4.2, which provides libraries for 64-bit. In the C++ project I have also set the platform to 64-bit, used the proper libraries, and set up all the dependencies; it gives no errors.

    Also, in my C# project I'm using EmguCV 2.4.2 64-bit, the OpenCV wrapper for C#.

    It lets me compile.
    • Edited by nirmal.91 Tuesday, October 16, 2012 1:57 PM
    • Marked as answer by Mike Feng Wednesday, October 17, 2012 6:19 AM
    Tuesday, October 16, 2012 1:48 PM

All replies

  • If the C++ type IplImage is a C++ class, then the function cannot be consumed by .NET.

    If the C++ type IplImage is a plain (POD) struct, then you need to declare an equivalent structure in C# in order to obtain its data, but note that the function returns a pointer.  This is an indication of a bad programming practice, although I cannot assure this unless I see the function's code.

    If the C++ function showNewImage() is allocating memory for the return value, then yes, it is bad programming practice and a source of memory leaks.  To continue this subject, please provide the details of IplImage, MIplImage and showNewImage().


    Jose R. MCP
    Code Samples

    Tuesday, October 16, 2012 3:03 AM
  • void strokeWidthTransform (IplImage * edgeImage,
                               IplImage * gradientX,
                               IplImage * gradientY,
                               bool dark_on_light,
                               IplImage * SWTImage,
                               std::vector<Ray> & rays) {
        // First pass
        float prec = .05;
        for( int row = 0; row < edgeImage->height; row++ ){
            const uchar* ptr = (const uchar*)(edgeImage->imageData + row * edgeImage->widthStep);
            for ( int col = 0; col < edgeImage->width; col++ ){
                if (*ptr > 0) {
                    Ray r;
    
                    Point2d p;
                    p.x = col;
                    p.y = row;
                    r.p = p;
                    std::vector<Point2d> points;
                    points.push_back(p);
    
                    float curX = (float)col + 0.5;
                    float curY = (float)row + 0.5;
                    int curPixX = col;
                    int curPixY = row;
                    float G_x = CV_IMAGE_ELEM ( gradientX, float, row, col);
                    float G_y = CV_IMAGE_ELEM ( gradientY, float, row, col);
                    // normalize gradient
                    float mag = sqrt( (G_x * G_x) + (G_y * G_y) );
                    if (dark_on_light){
                        G_x = -G_x/mag;
                        G_y = -G_y/mag;
                    } else {
                        G_x = G_x/mag;
                        G_y = G_y/mag;
    
                    }
                    while (true) {
                        curX += G_x*prec;
                        curY += G_y*prec;
                        if ((int)(floor(curX)) != curPixX || (int)(floor(curY)) != curPixY) {
                            curPixX = (int)(floor(curX));
                            curPixY = (int)(floor(curY));
                            // check if pixel is outside boundary of image
                            if (curPixX < 0 || (curPixX >= SWTImage->width) || curPixY < 0 || (curPixY >= SWTImage->height)) {
                                break;
                            }
                            Point2d pnew;
                            pnew.x = curPixX;
                            pnew.y = curPixY;
                            points.push_back(pnew);
    
                            if (CV_IMAGE_ELEM ( edgeImage, uchar, curPixY, curPixX) > 0) {
                                r.q = pnew;
                                // dot product
                                float G_xt = CV_IMAGE_ELEM(gradientX,float,curPixY,curPixX);
                                float G_yt = CV_IMAGE_ELEM(gradientY,float,curPixY,curPixX);
                                mag = sqrt( (G_xt * G_xt) + (G_yt * G_yt) );
                                if (dark_on_light){
                                    G_xt = -G_xt/mag;
                                    G_yt = -G_yt/mag;
                                } else {
                                    G_xt = G_xt/mag;
                                    G_yt = G_yt/mag;
    
                                }
    
                                if (acos(G_x * -G_xt + G_y * -G_yt) < PI/2.0 ) {
                                    float length = sqrt( ((float)r.q.x - (float)r.p.x)*((float)r.q.x - (float)r.p.x) + ((float)r.q.y - (float)r.p.y)*((float)r.q.y - (float)r.p.y));
                                    for (std::vector<Point2d>::iterator pit = points.begin(); pit != points.end(); pit++) {
                                        if (CV_IMAGE_ELEM(SWTImage, float, pit->y, pit->x) < 0) {
                                            CV_IMAGE_ELEM(SWTImage, float, pit->y, pit->x) = length;
                                        } else {
                                            CV_IMAGE_ELEM(SWTImage, float, pit->y, pit->x) = std::min(length, CV_IMAGE_ELEM(SWTImage, float, pit->y, pit->x));
                                        }
                                    }
                                    r.points = points;
                                    rays.push_back(r);
                                }
                                break;
                            }
                        }
                    }
                }
                ptr++;
            }
        }
    
    }

    This is the original function in C++, this is a text detection function that uses stroke width transform. The above function is called in the following text detection function:

    IplImage * textDetection (IplImage * input, bool dark_on_light)
    {
        assert ( input->depth == IPL_DEPTH_8U );
        assert ( input->nChannels == 3 );
        std::cout << "Running textDetection with dark_on_light " << dark_on_light << std::endl;
        // Convert to grayscale
        IplImage * grayImage =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_8U, 1 );
        cvCvtColor ( input, grayImage, CV_RGB2GRAY );
        // Create Canny Image
        double threshold_low = 175;
        double threshold_high = 320;
        IplImage * edgeImage =
                cvCreateImage( cvGetSize (input),IPL_DEPTH_8U, 1 );
        cvCanny(grayImage, edgeImage, threshold_low, threshold_high, 3) ;
        cvSaveImage ( "canny.png", edgeImage);
    
        // Create gradient X, gradient Y
        IplImage * gaussianImage =
                cvCreateImage ( cvGetSize(input), IPL_DEPTH_32F, 1);
        cvConvertScale (grayImage, gaussianImage, 1./255., 0);
        cvSmooth( gaussianImage, gaussianImage, CV_GAUSSIAN, 5, 5);
        IplImage * gradientX =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_32F, 1 );
        IplImage * gradientY =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_32F, 1 );
        cvSobel(gaussianImage, gradientX , 1, 0, CV_SCHARR);
        cvSobel(gaussianImage, gradientY , 0, 1, CV_SCHARR);
        cvSmooth(gradientX, gradientX, 3, 3);
        cvSmooth(gradientY, gradientY, 3, 3);
        cvReleaseImage ( &gaussianImage );
        cvReleaseImage ( &grayImage );
    
        // Calculate SWT and return ray vectors
        std::vector<Ray> rays;
        IplImage * SWTImage =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_32F, 1 );
        for( int row = 0; row < input->height; row++ ){
            float* ptr = (float*)(SWTImage->imageData + row * SWTImage->widthStep);
            for ( int col = 0; col < input->width; col++ ){
                *ptr++ = -1;
            }
        }
        strokeWidthTransform ( edgeImage, gradientX, gradientY, dark_on_light, SWTImage, rays );
        SWTMedianFilter ( SWTImage, rays );
    
        IplImage * output2 =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_32F, 1 );
        normalizeImage (SWTImage, output2);
        IplImage * saveSWT =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_8U, 1 );
        cvConvertScale(output2, saveSWT, 255, 0);
        cvSaveImage ( "SWT.png", saveSWT);
        cvReleaseImage ( &output2 );
        cvReleaseImage( &saveSWT );
    
        // Calculate legally connect components from SWT and gradient image.
        // return type is a vector of vectors, where each outer vector is a component and
        // the inner vector contains the (y,x) of each pixel in that component.
        std::vector<std::vector<Point2d> > components = findLegallyConnectedComponents(SWTImage, rays);
    
        // Filter the components
        std::vector<std::vector<Point2d> > validComponents;
        std::vector<std::pair<Point2d,Point2d> > compBB;
        std::vector<Point2dFloat> compCenters;
        std::vector<float> compMedians;
        std::vector<Point2d> compDimensions;
        filterComponents(SWTImage, components, validComponents, compCenters, compMedians, compDimensions, compBB );
    
        IplImage * output3 =
                cvCreateImage ( cvGetSize ( input ), 8U, 3 );
        renderComponentsWithBoxes (SWTImage, validComponents, compBB, output3);
        cvSaveImage ( "components.png",output3);
        //cvReleaseImage ( &output3 );
    
        // Make chains of components
        std::vector<Chain> chains;
        chains = makeChains(input, validComponents, compCenters, compMedians, compDimensions, compBB);
    
        IplImage * output4 =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_8U, 1 );
        renderChains ( SWTImage, validComponents, chains, output4 );
        //cvSaveImage ( "text.png", output4);
    
        IplImage * output5 =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_8U, 3 );
        cvCvtColor (output4, output5, CV_GRAY2RGB);
        cvReleaseImage ( &output4 );
    
        /*IplImage * output =
                cvCreateImage ( cvGetSize ( input ), IPL_DEPTH_8U, 3 );
        renderChainsWithBoxes ( SWTImage, validComponents, chains, compBB, output); */
        cvReleaseImage ( &gradientX );
        cvReleaseImage ( &gradientY );
        cvReleaseImage ( &SWTImage );
        cvReleaseImage ( &edgeImage );
        return output5;
    }

    But I only want to call the strokeWidthTransform function as I have already performed all other steps necessary before applying the strokeWidthTransform function.

    I am not that familiar with C++, but I modified strokeWidthTransform to return an IplImage, because the original strokeWidthTransform is a void function and doesn't return any image to C#:

    IplImage * strokeWidthTransform (IplImage * edgeImage,
                               IplImage * gradientX,
                               IplImage * gradientY,
                               bool dark_on_light) {
        // First pass
    	std::vector<Ray> rays;
    	IplImage * SWTImage2 = cvCreateImage ( cvGetSize ( edgeImage ), IPL_DEPTH_32F, 1 );
        for( int row = 0; row < edgeImage->height; row++ ){
            float* ptr = (float*)(SWTImage2->imageData + row * SWTImage2->widthStep);
            for ( int col = 0; col < edgeImage->width; col++ ){
                *ptr++ = -1;
            }
        }
        float prec = .05;
        for( int row = 0; row < edgeImage->height; row++ ){
            const uchar* ptr = (const uchar*)(edgeImage->imageData + row * edgeImage->widthStep);
            for ( int col = 0; col < edgeImage->width; col++ ){
                if (*ptr > 0) {
                    Ray r;
    
                    Point2d p;
                    p.x = col;
                    p.y = row;
                    r.p = p;
                    std::vector<Point2d> points;
                    points.push_back(p);
    
                    float curX = (float)col + 0.5;
                    float curY = (float)row + 0.5;
                    int curPixX = col;
                    int curPixY = row;
                    float G_x = CV_IMAGE_ELEM ( gradientX, float, row, col);
                    float G_y = CV_IMAGE_ELEM ( gradientY, float, row, col);
                    // normalize gradient
                    float mag = sqrt( (G_x * G_x) + (G_y * G_y) );
                    if (dark_on_light){
                        G_x = -G_x/mag;
                        G_y = -G_y/mag;
                    } else {
                        G_x = G_x/mag;
                        G_y = G_y/mag;
    
                    }
                    while (true) {
                        curX += G_x*prec;
                        curY += G_y*prec;
                        if ((int)(floor(curX)) != curPixX || (int)(floor(curY)) != curPixY) {
                            curPixX = (int)(floor(curX));
                            curPixY = (int)(floor(curY));
                            // check if pixel is outside boundary of image
                            if (curPixX < 0 || (curPixX >= SWTImage2->width) || curPixY < 0 || (curPixY >= SWTImage2->height)) {
                                break;
                            }
                            Point2d pnew;
                            pnew.x = curPixX;
                            pnew.y = curPixY;
                            points.push_back(pnew);
    
                            if (CV_IMAGE_ELEM ( edgeImage, uchar, curPixY, curPixX) > 0) {
                                r.q = pnew;
                                // dot product
                                float G_xt = CV_IMAGE_ELEM(gradientX,float,curPixY,curPixX);
                                float G_yt = CV_IMAGE_ELEM(gradientY,float,curPixY,curPixX);
                                mag = sqrt( (G_xt * G_xt) + (G_yt * G_yt) );
                                if (dark_on_light){
                                    G_xt = -G_xt/mag;
                                    G_yt = -G_yt/mag;
                                } else {
                                    G_xt = G_xt/mag;
                                    G_yt = G_yt/mag;
    
                                }
    
                                if (acos(G_x * -G_xt + G_y * -G_yt) < PI/2.0 ) {
                                    float length = sqrt( ((float)r.q.x - (float)r.p.x)*((float)r.q.x - (float)r.p.x) + ((float)r.q.y - (float)r.p.y)*((float)r.q.y - (float)r.p.y));
                                    for (std::vector<Point2d>::iterator pit = points.begin(); pit != points.end(); pit++) {
                                        if (CV_IMAGE_ELEM(SWTImage2, float, pit->y, pit->x) < 0) {
                                            CV_IMAGE_ELEM(SWTImage2, float, pit->y, pit->x) = length;
                                        } else {
                                            CV_IMAGE_ELEM(SWTImage2, float, pit->y, pit->x) = std::min(length, CV_IMAGE_ELEM(SWTImage2, float, pit->y, pit->x));
                                        }
                                    }
                                    r.points = points;
                                    rays.push_back(r);
                                }
                                break;
                            }
                        }
                    }
                }
                ptr++;
            }
        }
    	return SWTImage2;
    }

    I have used this in C# to call the DLL:

    [DllImport("dll")] unsafe public static extern MIplImage swt(MIplImage edgeImage, MIplImage gX, MIplImage gY, bool dark_on_light, MIplImage swtImage);

    Thank you for the reply, your help is most appreciated. 

    Tuesday, October 16, 2012 3:26 AM
  • Sorry, but that is still incomplete information.  I still don't know the declaration in C++ of the IplImage data type.  I also don't see the C# declaration of the MIplImage data type.  I need to see both to determine if it is even possible to marshal such return type.

    But what I can gather from the C++ code of the strokeWidthTransform() function (which by the way is NOT the one in the original question) is that this IplImage data type is a data type provided by the OpenCV library.  I looked up IplImage and found it to be a simple struct, or so it seems (http://opencv.willowgarage.com/documentation/basic_structures.html).  So show me the C# counterpart to make sure it matches.

    Regarding memory leaks, your DLL must provide an entry point to delete the returned structure.  Provide a new function that receives a pointer to the IplImage and then call cvReleaseImage() on this pointer.  This will ensure there are no memory leaks.
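    To illustrate the pairing described above with a self-contained sketch (plain structs standing in for IplImage; all names here are made up for illustration, and __declspec(dllexport) is omitted for brevity): the DLL that allocates must also export the matching release, and the C# side calls that release instead of ever freeing the pointer itself.

```cpp
#include <cstdlib>

// Stand-in for IplImage; illustration only.
struct Image { int width; int height; float* data; };

// The allocating entry point...
extern "C" Image* createImage(int w, int h) {
    Image* img = static_cast<Image*>(std::malloc(sizeof(Image)));
    img->width = w;
    img->height = h;
    img->data = static_cast<float*>(
        std::calloc(static_cast<size_t>(w) * h, sizeof(float)));
    return img;
}

// ...and the matching release, mirroring cvReleaseImage(IplImage**):
// it frees the pixel buffer, then the header, and nulls the caller's pointer.
extern "C" void releaseImage(Image** img) {
    if (img && *img) {
        std::free((*img)->data);
        std::free(*img);
        *img = nullptr;
    }
}
```

    On the C# side both functions would be declared with IntPtr (and ref IntPtr for the release), so the native image's lifetime stays entirely inside the DLL.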


    Jose R. MCP
    Code Samples

    Tuesday, October 16, 2012 4:24 AM
  • The C# declaration that I have used is this:

    [DllImport("dll")]
    unsafe public static extern MIplImage swt(MIplImage edgeImage, MIplImage gX, MIplImage gY, bool dark_on_light, MIplImage swtImage);

    I modified the original strokeWidthTransform() function (the void version I posted in my earlier reply) into the IplImage*-returning version shown there, because I wanted to return an image, and I used the C# declaration above to call the modified C++ function.

    Tuesday, October 16, 2012 4:34 AM
  • Once more:  I need to see the C# declaration of MIplImage.  What you have shown twice is the declaration of some function called swt that returns an MIplImage.  Those are two different things.

    Jose R. MCP
    Code Samples

    Tuesday, October 16, 2012 4:37 AM
  • I'm sorry I didn't clarify before; the C++ function was compiled into a DLL in another project like this:

    #include "TextDetection.h"
    #include "TextDetection.cpp"
    
    extern "C" __declspec(dllexport)IplImage * swt(IplImage * edgeImage,IplImage * gradientX, IplImage * gradientY, bool dark_on_light,IplImage * SWTImage){
    	IplImage * output;
    	output = strokeWidthTransform(edgeImage, gradientX, gradientY, dark_on_light, SWTImage);
    	return output;
    }
    That's why the function is called swt instead of strokeWidthTransform.


    • Edited by nirmal.91 Tuesday, October 16, 2012 4:44 AM
    Tuesday, October 16, 2012 4:44 AM
  • No.  That's NOT what I need.  I need to see the definition of MIplImage.  Go to your C# project, then locate the word "MIplImage" anywhere in your project, then right-click it and select Go To Definition.  I need to see that definition.

    Jose R. MCP
    Code Samples

    Tuesday, October 16, 2012 4:49 AM
  •  public struct MIplImage
        {
            // Summary:
            //     Alignment of image rows (4 or 8).  OpenCV ignores it and uses widthStep instead
            public int align;
            //
            // Summary:
            //     ignored by OpenCV
            public int alphaChannel;
            //
            // Summary:
            //     ditto
            public int[] BorderConst;
            //
            // Summary:
            //     border completion mode, ignored by OpenCV
            public int[] BorderMode;
            //
            // Summary:
            //     ditto
            public byte[] channelSeq;
            //
            // Summary:
            //     ignored by OpenCV
            public byte[] colorModel;
            //
            // Summary:
            //     0 - interleaved color channels, 1 - separate color channels.  cvCreateImage
            //     can only create interleaved images
            public int dataOrder;
            //
            // Summary:
            //     pixel depth in bits: IPL_DEPTH_8U, IPL_DEPTH_8S, IPL_DEPTH_16U, IPL_DEPTH_16S,
            //     IPL_DEPTH_32S, IPL_DEPTH_32F and IPL_DEPTH_64F are supported
            public IPL_DEPTH depth;
            //
            // Summary:
            //     image height in pixels
            public int height;
            //
            // Summary:
            //     version (=0)
            public int ID;
            //
            // Summary:
            //     pointer to aligned image data
            public IntPtr imageData;
            //
            // Summary:
            //     pointer to a very origin of image data (not necessarily aligned) - it is
            //     needed for correct image deallocation
            public IntPtr imageDataOrigin;
            //
            // Summary:
            //     ditto
            public IntPtr imageId;
            //
            // Summary:
            //     image data size in bytes (=image->height*image->widthStep in case of interleaved
            //     data)
            public int imageSize;
            //
            // Summary:
            //     must be NULL in OpenCV
            public IntPtr maskROI;
            //
            // Summary:
            //     Most of OpenCV functions support 1,2,3 or 4 channels
            public int nChannels;
            //
            // Summary:
            //     sizeof(IplImage)
            public int nSize;
            //
            // Summary:
            //     0 - top-left origin, 1 - bottom-left origin (Windows bitmaps style)
            public int origin;
            //
            // Summary:
            //     image ROI. when it is not NULL, this specifies image region to process
            public IntPtr roi;
            //
            // Summary:
            //     ditto
            public IntPtr tileInfo;
            //
            // Summary:
            //     image width in pixels
            public int width;
            //
            // Summary:
            //     size of aligned image row in bytes
            public int widthStep;
        }
    Is it this?

    Tuesday, October 16, 2012 4:55 AM
  • Yes, that's it, but it looks nothing like the IplImage struct found at http://opencv.willowgarage.com/documentation/basic_structures.html.  I guess there's an inconsistency in OpenCV versions between what you use and what the online document says.

    Best if you look up the definition of IplImage in your C++ OpenCV header files and show it to see if it matches that C# one.


    Jose R. MCP
    Code Samples

    Tuesday, October 16, 2012 5:01 AM
  • Yes, they are different. I will try using the same version and see the result.
    Tuesday, October 16, 2012 5:14 AM
  • Hi, I corrected the OpenCV versions but now I get this error

    An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

    I am using a 64-bit machine and I have selected the correct platform in the project properties, so I don't know why this error occurs.
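    HRESULT 0x8007000B is BadImageFormatException, which almost always means the process bitness and the native DLL bitness disagree. One quick way to see what the C# process actually runs as (a sketch, not a diagnosis; also check whether an "Any CPU" platform setting is silently overriding your choice):

```csharp
// IntPtr.Size is 8 in an x64 process and 4 in an x86 process;
// the native DLL's bitness must match this.
Console.WriteLine("IntPtr.Size = " + IntPtr.Size);
Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
```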

    Tuesday, October 16, 2012 12:47 PM
  • Is OpenCV offered for 64-bit?  If not, you'll be forced to compile for 32-bit only.

    Jose R. MCP
    Code Samples

    Tuesday, October 16, 2012 1:31 PM
  • Thank you for all your help. I am going to move my project to a 32-bit machine, because the EmguCV API won't let me use any API functions unless the platform is x64, while the DLL only works if the selected platform is x86.
    Thursday, October 18, 2012 4:41 AM
  • Hi Nirmal, 

    I am facing the same problem you have discussed here. I want to know how you returned the IplImage from the C++ function to C#, as I have included the DLL using DllImport.

    Thursday, March 20, 2014 12:02 PM