

OpenCV - iPhone

Uncategorized · 2009.10.15 11:59

This time OpenCV was ported to the Apple iPhone platform.

First of all, we need to compile the OpenCV library itself so that it can be used on the iPhone. There are two ways to do this:

1. Use OpenCV as a private framework.
2. Compile OpenCV as a static library.

The first approach looks more convenient to use, though I was not able to make it work properly on the iPhone (it works fine in the simulator, but not on the real hardware).

But anyway, let’s see how both approaches can be followed.

1. Private framework

Instructions on how to build a universal OpenCV framework for the simulator and the iPhone (supporting both i686 and ARM) can be found here.

To add this framework to your application, do the following:

1. Create a new application in Xcode.
2. Right-click the Frameworks group and select “Add -> Existing Frameworks”.
3. Select the OpenCV.framework folder you have created.
4. In the Xcode menu select “Project -> New Build Phase -> New Copy Files Build Phase”.
5. In the window that opens, select “Frameworks” as the destination and close the window.
6. Now expand the group “Targets -> your_target” and drag OpenCV.framework from the Frameworks group to the “Copy Files” group under your target.
7. Add “#import <OpenCV/OpenCV.h>” anywhere in your code (well, not literally anywhere, but… you know…).
8. You will probably have to change the type of your sources – change the extension of the source files where you use OpenCV APIs from “.m” to “.mm”.

Now you should be able to use OpenCV routines in your application. But again, for me it worked perfectly on the simulator, while on the iPhone the application crashed right after start. I’ll investigate this and post an update later.

2. Static library

This approach is less convenient to use, but it works on both the simulator and the hardware. So, let’s start.

1. To create the static library, follow these instructions.
2. Now that you have five *.a files, to make your life easier, put the libraries in a separate folder. Then walk through the OpenCV sources (folders cv, cvaux, cxcore, etc.) and copy all the header files to a separate location. You will end up with a folder (let’s call it “OpenCV.lib”) containing all the *.a files and a subfolder (say “hdrs”) containing all the header files.
3. Go ahead and create a new application in Xcode.
4. Add all the OpenCV header files to your project – right-click the “Classes” group, select “Add -> Existing Files” and double-click the “/…/OpenCV.lib/hdrs” folder you created in step 2.
5. Somewhere in your code, include the files cv.h, ml.h and highgui.h.
6. Now double-click your target (under the “Targets” group) and go to the “Build” tab.
7. In the “Linking” section, find the option “Other Linker Flags” and add the paths to your OpenCV libraries. The field should look like “/…/OpenCV.lib/libcv.a /…/OpenCV.lib/libcvaux.a ” and so on.
8. Ok, you are now ready to go!
9. No, stop. Don’t forget to add the libstdc++ library to your project. Otherwise you’ll face compilation issues.
10. Well, now you are ready.

A few useful notes.

1. OpenCV works with IplImage, while your application will require a UIImage for display. To convert from IplImage to UIImage you can use the following function (thanks to this guy for it):

-(CGImageRef)getCGImageFromCVImage:(IplImage*)cvImage
{
    // create a copy of the image with the channels swapped (BGR to RGB)
    IplImage *imgForUI = cvCreateImage(cvSize(cvImage->width, cvImage->height), 8, 3);
    cvConvertImage(cvImage, imgForUI, CV_CVTIMG_SWAP_RB);

    int width = imgForUI->width;
    int height = imgForUI->height;
    int step = imgForUI->widthStep;      // bytes per row, including padding
    int channels = imgForUI->nChannels;  // 3 after the conversion above

    // CFDataCreate copies the pixel bytes, so the temporary image
    // can be released right away
    CFDataRef imgData = CFDataCreate(NULL, (const UInt8 *)imgForUI->imageData,
                                     imgForUI->imageSize);
    cvReleaseImage(&imgForUI);

    // wrap the pixel data in a CGDataProvider
    CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(imgData);
    CFRelease(imgData);

    // build the CGImageRef; the caller is responsible for releasing it
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width,
                                       height,
                                       8,             // bits per component
                                       8 * channels,  // bits per pixel
                                       step,          // bytes per row
                                       colorSpace,
                                       kCGImageAlphaNone,
                                       imgDataProvider,
                                       NULL,
                                       NO,
                                       kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(imgDataProvider);

    return cgImage;
}

Then, use the CGImage you got to create a UIImage that can be displayed to the user.
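The channel reordering that cvConvertImage performs with CV_CVTIMG_SWAP_RB can be sketched in plain C for a tightly packed 3-channel buffer (a simplified stand-in for illustration only; it ignores the row padding a real IplImage may carry):

```c
#include <assert.h>
#include <stddef.h>

/* Swap the first and third channels of an interleaved 3-channel buffer
   in place, turning BGR into RGB (or back). This mirrors what
   cvConvertImage(..., CV_CVTIMG_SWAP_RB) does, minus row padding. */
static void swap_rb(unsigned char *pixels, size_t n_pixels)
{
    for (size_t i = 0; i < n_pixels; ++i) {
        unsigned char tmp   = pixels[3 * i];      /* save B */
        pixels[3 * i]       = pixels[3 * i + 2];  /* B <- R */
        pixels[3 * i + 2]   = tmp;                /* R <- B */
    }
}
```

Applied to two BGR pixels {1,2,3, 4,5,6}, this yields {3,2,1, 6,5,4}: only the outer channels trade places, the middle (green) byte stays put.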

2. If you try to load an image using cvLoadImage on the iPhone, process it with OpenCV (e.g. try to find faces in it) and then display it, all you will see is a black rectangle with some color junk at the beginning (and, by the way, face detection will not work on this picture – 0 objects will be found). This is because cvLoadImage does not work properly on the iPhone hardware for some reason (though, as usual, everything is fine in the simulator). To work around this, open the image using the iPhone SDK APIs and then convert it to an IplImage (boris, thanks again):

- (void)manipulateOpenCVImagePixelDataWithCGImage:(CGImageRef)inImage openCVimage:(IplImage *)openCVimage
{
    // Create the bitmap context
    CGContextRef cgctx = [self createARGBBitmapContext:inImage];
    if (cgctx == NULL)
    {
        // error creating context
        return;
    }

    int height = openCVimage->height;
    int width = openCVimage->width;
    int step = openCVimage->widthStep;
    int channels = openCVimage->nChannels;
    uchar *cvdata = (uchar *)openCVimage->imageData;

    CGRect rect = {{0, 0}, {width, height}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);

    // Now we can get a pointer to the image data associated with the
    // bitmap context.
    unsigned char *data = (unsigned char *)CGBitmapContextGetData(cgctx);

    if (data != NULL)
    {
        // copy the ARGB pixels into the BGR IplImage buffer
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                cvdata[y*step+x*channels+0] = data[(4*y*width)+(4*x)+3];
                cvdata[y*step+x*channels+1] = data[(4*y*width)+(4*x)+2];
                cvdata[y*step+x*channels+2] = data[(4*y*width)+(4*x)+1];
            }
        }
    }

    // When finished, release the context
    CGContextRelease(cgctx);
    // Free the image data memory that backed the context
    if (data)
    {
        free(data);
    }
}

- (CGContextRef)createARGBBitmapContext:(CGImageRef)inImage
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Get the image width and height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue,
    // and alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    // Use the device RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        return NULL;
    }

    // Allocate memory for the image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, grayscale, and so on) it will be converted to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
    }

    // Make sure to release the color space before returning
    CGColorSpaceRelease(colorSpace);

    return context;
}

- (IplImage *)getCVImageFromCGImage:(CGImageRef)cgImage
{
    IplImage *newCVImage = cvCreateImage(cvSize(CGImageGetWidth(cgImage), CGImageGetHeight(cgImage)), 8, 3);

    [self manipulateOpenCVImagePixelDataWithCGImage:cgImage openCVimage:newCVImage];

    return newCVImage;
}
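The index arithmetic in the copy loop of manipulateOpenCVImagePixelDataWithCGImage: is easy to get wrong, so here is the same per-pixel ARGB-to-BGR mapping isolated as a plain C helper (a sketch under the same layout assumptions: a kCGImageAlphaPremultipliedFirst context stores each pixel as A,R,G,B with no row padding, while the destination uses a byte stride `step`):

```c
#include <assert.h>

/* Copy one pixel from an ARGB source (4 bytes per pixel, rows of
   4 * width bytes) into a BGR destination with row stride `step`,
   using the same index arithmetic as the Objective-C loop above. */
static void argb_to_bgr_pixel(const unsigned char *argb, int width,
                              unsigned char *bgr, int step, int channels,
                              int x, int y)
{
    bgr[y * step + x * channels + 0] = argb[4 * y * width + 4 * x + 3]; /* B */
    bgr[y * step + x * channels + 1] = argb[4 * y * width + 4 * x + 2]; /* G */
    bgr[y * step + x * channels + 2] = argb[4 * y * width + 4 * x + 1]; /* R */
}
```

For an ARGB pixel {255, 10, 20, 30} (opaque, R=10, G=20, B=30) the destination receives {30, 20, 10} — exactly the BGR order an IplImage expects.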

3. If you are going to change OpenCV itself, adding the option “–enable-debug” might be useful for debugging. Note, though, that this will reduce performance (it can run up to 1.5 times slower). It is worth adding anyway, since without it, stepping into some OpenCV APIs in the Xcode debugger may freeze or crash the application. Also, if you are using OpenCV as a static library, make sure that all the OpenCV headers added to your project are up-to-date; otherwise the application might not work properly.

And now the sad part…
Performance of face detection on the iPhone is painful. Processing a VGA image (640×480) with three faces in it takes 6 to 20 seconds (depending on the cvHaarDetectObjects parameters). A 320×240 image is a bit faster, but still slow – 1 to 6 seconds.

Okay, that’s all, folks.

*별빛* – a blog for sharing UI/UX development issues and methods: Flex/AIR and Silverlight on the PC side; iPhone and Android on the smartphone side.

Tag: OpenCV

OpenCV is a computer vision library developed by Intel; with it we can easily detect faces, for example. I’d like to note how to use it with the iPhone SDK, including the build scripts and a demo application. I have attached screenshots from the demo application here.

Getting Started

All the source code and resources are open, and you can get them from my github repository. It includes pre-compiled OpenCV libraries and headers so that you can easily start testing. If you already have the git command, just clone the whole repository from github. If not, grab the zip or tar from the download link on github and extract it.

% git clone git://github.com/niw/iphone_opencv_test.git

After getting the source code, open OpenCVTest.xcodeproj with Xcode, then build it. You will get a demo application for both the iPhone Simulator and an iPhone device.

Building OpenCV library from source code

You can also build the OpenCV library from source using a gcc cross-compile environment. I added some support scripts to make this easy. The important point is that the iPhone SDK doesn’t support dynamic linking (“.framework”-style); we have to build OpenCV as a static library and link it into the application statically.

  1. Get the source code from sourceforge. I tested with opencv-1.1pre1.tar.gz. Note that other source packages and source from svn head do not work well with this script.

  2. Extract the tar.gz at the top of the project dir

    % tar xzvf opencv-1.1pre1.tar.gz
    
  3. Edit opencv_build_scripts/configure_*.sh for your environment (–prefix etc…) if needed.

  4. Build for each platform (armv6 for the iPhone device, sim for the iPhone simulator)

    % cd opencv-1.1.0
    % mkdir build_(armv6 or sim)
    % pushd build_(armv6 or sim)
    % ../../opencv_build_scripts/configure_(armv6 or sim).sh
    % make
    % make install
    % popd
    

Converting images between UIImage and IplImage

OpenCV uses the IplImage structure for processing, while the iPhone SDK uses UIImage objects to display images on the screen. This means we need a converter between UIImage and IplImage. Thankfully, we can build one with the iPhone SDK APIs.

Creating an IplImage from a UIImage looks like this.

// NOTE: you SHOULD cvReleaseImage() the return value when you are done with it.
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
  // Getting CGImage from UIImage
  CGImageRef imageRef = image.CGImage;

  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  // Creating a temporary IplImage for drawing
  IplImage *iplimage = cvCreateImage(
    cvSize(image.size.width,image.size.height), IPL_DEPTH_8U, 4
  );
  // Creating a CGContext for the temporary IplImage
  CGContextRef contextRef = CGBitmapContextCreate(
    iplimage->imageData, iplimage->width, iplimage->height,
    iplimage->depth, iplimage->widthStep,
    colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault
  );
  // Drawing CGImage to CGContext
  CGContextDrawImage(
    contextRef,
    CGRectMake(0, 0, image.size.width, image.size.height),
    imageRef
  );
  CGContextRelease(contextRef);
  CGColorSpaceRelease(colorSpace);

  // Creating result IplImage
  IplImage *ret = cvCreateImage(cvGetSize(iplimage), IPL_DEPTH_8U, 3);
  cvCvtColor(iplimage, ret, CV_RGBA2BGR);
  cvReleaseImage(&iplimage);

  return ret;
}

Don’t forget to release the IplImage with cvReleaseImage after using it!
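The color conversion done above by cvCvtColor with CV_RGBA2BGR (drop the alpha channel, reverse the channel order) can be sketched in plain C for tightly packed buffers (a simplified model; real IplImage rows may carry padding):

```c
#include <assert.h>
#include <stddef.h>

/* Convert a packed RGBA buffer (4 bytes per pixel) into a packed BGR
   buffer (3 bytes per pixel): discard alpha and reverse the channels,
   as cvCvtColor(src, dst, CV_RGBA2BGR) does. Row padding ignored. */
static void rgba_to_bgr(const unsigned char *rgba, unsigned char *bgr,
                        size_t n_pixels)
{
    for (size_t i = 0; i < n_pixels; ++i) {
        bgr[3 * i + 0] = rgba[4 * i + 2]; /* B */
        bgr[3 * i + 1] = rgba[4 * i + 1]; /* G */
        bgr[3 * i + 2] = rgba[4 * i + 0]; /* R */
    }
}
```

For example, the RGBA pixel {10, 20, 30, 255} becomes the BGR pixel {30, 20, 10}; the alpha byte 255 is simply dropped.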

And creating a UIImage from an IplImage looks like this.

// NOTE: you should convert the color mode to RGB before passing an image to this function
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  // Allocating the buffer for CGImage
  NSData *data =
    [NSData dataWithBytes:image->imageData length:image->imageSize];
  CGDataProviderRef provider =
    CGDataProviderCreateWithCFData((CFDataRef)data);
  // Creating CGImage from chunk of IplImage
  CGImageRef imageRef = CGImageCreate(
    image->width, image->height,
    image->depth, image->depth * image->nChannels, image->widthStep,
    colorSpace, kCGImageAlphaNone|kCGBitmapByteOrderDefault,
    provider, NULL, false, kCGRenderingIntentDefault
  );
  // Getting UIImage from CGImage
  UIImage *ret = [UIImage imageWithCGImage:imageRef];
  CGImageRelease(imageRef);
  CGDataProviderRelease(provider);
  CGColorSpaceRelease(colorSpace);
  return ret;
}
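Both converters pass image->widthStep, not width * nChannels, as the row stride. A quick C sketch shows why the two can differ: cvCreateImage pads each row out to a 4-byte boundary by default (the 4-byte alignment is an assumption of this sketch, taken from OpenCV's default row alignment):

```c
#include <assert.h>

/* Row stride in bytes of an 8-bit-per-channel image whose rows are
   padded to a 4-byte boundary, modeling IplImage's widthStep under
   OpenCV's default row alignment (assumed to be 4 here). */
static int width_step(int width, int channels)
{
    int raw = width * channels;
    return (raw + 3) & ~3; /* round up to a multiple of 4 */
}
```

A 640×480 BGR image needs no padding (640 * 3 = 1920 is already a multiple of 4), but a 5-pixel-wide BGR row occupies 16 bytes, not 15 — which is exactly why indexing with width * channels instead of widthStep produces skewed images.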

Ok, now you can enjoy OpenCV on the iPhone!

Frequently Asked Questions

  • I can’t build and run this demo application for an iPhone device, though I can build and run it on the iPhone simulator… why? I get the following error when building.

    ld warning: in /usr/local/lib/libcv.dylib, file is not of required architecture
    ld warning: in /usr/local/lib/libcxcore.dylib, file is not of required architecture
    Undefined symbols:
      "_cvCreateMemStorage", referenced from:
          -[OpenCVTestViewController opencvFaceDetect:] in OpenCVTestViewController.o
      "_cvGetSeqElem", referenced from:
          -[OpenCVTestViewController opencvFaceDetect:] in ......
    
    • Have you ever installed OpenCV for Mac OS X? This error occurs because the linker picks up the Mac OS X library instead of the one built for the iPhone device. I have solved this problem, so you can now build it for the iPhone device. Please git pull or download the package again from github.

One more thing…

I mentioned that face detection using OpenCV takes a very long time. For example, detecting on an image the size of the iPhone screen takes 10 seconds or more… hmmm.

License

This sample is under MIT License.
