Machine Vision


Converting stream image data into an image control without IMAQ

Hi all,

 

I'm new to working with LabVIEW and got stuck on a problem that may be easy/silly. I have a PixeLINK camera (PL-D7715) and can successfully connect to it and view the stream in a separate window (using the LabVIEW wrappers provided by the PixeLINK SDK). I can also grab the current frame. The problem is that, for development and deployment reasons, we don't have Vision Acquisition Software (VAS) installed, and my aim is to take that frame and display it in a control on the UI. The UI should display:

  1. A live view of the current stream
  2. An image taken at a specific time from the stream in item (1), enhanced and then displayed in a second control

The GetNextFrame function provides Data OUT as a 1D array, which I need to convert and display as an image (probably with Draw Flattened Pixmap?). I could either use this to display frames in a loop, or use the PixeLINK SetStream function (but that only opens a separate, maximized window). The descriptions of the GetNextFrame inputs/outputs are copied below for a quick look. Any help with the above will be greatly appreciated.
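Since LabVIEW's Draw Flattened Pixmap.vi takes the flat pixel array plus the image dimensions directly, the main step is splitting the 1D buffer into rows. Here is a minimal sketch of that reshape in Python (LabVIEW itself is graphical, so this only illustrates the indexing); the dimensions are made-up, and in practice the real width/height would come from pDescriptor:

```python
def frame_to_rows(data, width, height, bytes_per_pixel=1):
    """Split a flat pixel buffer into `height` rows of `width` pixels.

    For a Mono8 frame, bytes_per_pixel is 1; other pixel formats
    (see pDescriptor in the SDK docs) need a different stride.
    """
    row_len = width * bytes_per_pixel
    assert len(data) == row_len * height, "buffer size must match ROI"
    return [data[i * row_len:(i + 1) * row_len] for i in range(height)]

# Tiny illustrative 4x2 Mono8 "frame"
flat = list(range(8))
rows = frame_to_rows(flat, width=4, height=2)
# rows -> [[0, 1, 2, 3], [4, 5, 6, 7]]
```

In LabVIEW terms, this row-splitting is what Reshape Array (or wiring the flat array straight into Draw Flattened Pixmap with the correct width) does for you.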

 

 

Controls and Indicators

      hCamera IN is the camera handle. This value is returned by Initialize.vi and is passed between PixeLINK VIs as a reference.

       uBufferSize IN The size of the image buffer required in bytes. If Mode IN is set to UsePointer, uBufferSize should indicate the size of the buffer pointed to by pPixel IN. This buffer must be large enough to hold the requested image data.

       OutputMode IN Determines whether the data is passed in the form delivered by the camera (default), converted to RGB32 compatible with the NI IMAQ RGB image format, or converted to an RGB24 buffer. Use RGB32 when connected to a color camera in Raw8 (Bayer 8) or Raw16 (Bayer 16) mode with an IMAQ image type set to RGB. Use RGB24 when connected to a color camera and using an array. Note that the buffer size required must be set for the number of RGB pixels at one byte per pixel. For monochrome cameras, use IMAQ image types of MONO8 or MONO16 and set OutputMode IN to Default. OutputMode IN = RGB32 is relevant only if Mode IN is set to UsePointer; OutputMode IN = RGB24 is relevant only if Mode IN is set to UseArray.

       Mode IN Determines whether GetNextFrame returns an array containing the image data or fills a buffer at the location indicated by pPixel IN. If set to UseArray, pPixel IN is ignored. If set to UsePointer, Data OUT will be empty.

      pPixel IN A pointer to an image buffer of sufficient size to hold the image data. Ignored if Mode IN is set to UseArray.

      pDescriptor A structure containing descriptive information about the frame returned from the camera. It contains the values of all camera settings used to capture the image. See the API reference manual for more information. 

      hCamera OUT has the same value as hCamera IN.

      uBufferSize OUT A pass-through of the uBufferSize IN variable.

     Data OUT An array of image data returned from the camera. The data is valid only if Mode IN is set to UseArray. To interpret the data, check the Pixel Format settings in pDescriptor.
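To make the buffer layout above concrete, here is a hedged sketch (again Python, not LabVIEW) of indexing one pixel out of a flat RGB32 buffer. The 4-bytes-per-pixel stride follows from the description above; the byte order within each pixel is an assumption to verify against the PixeLINK API manual before relying on it:

```python
def rgb32_pixel(data, width, x, y):
    """Read pixel (x, y) from a flat RGB32 buffer (4 bytes per pixel).

    Byte order within a pixel is ASSUMED here to be B, G, R, padding --
    confirm against the PixeLINK/IMAQ documentation.
    """
    i = (y * width + x) * 4
    b, g, r = data[i], data[i + 1], data[i + 2]
    return (r, g, b)

# 2x1 "image": first pixel pure blue, second pure red (under the assumed order)
buf = [255, 0, 0, 0,
       0, 0, 255, 0]
# rgb32_pixel(buf, 2, 0, 0) -> (0, 0, 255)
# rgb32_pixel(buf, 2, 1, 0) -> (255, 0, 0)
```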

 

Message 1 of 5

I just want to warn you that even if you get this code working without IMAQ, it will probably be very heavy on your CPU and memory.

Converting the image to an array instead of displaying the image directly is not efficient.

This operation allocates memory the size of the image, and I suspect it is also not efficient on the CPU.

A USB 3.0 camera can generate a lot of data, so you may want to keep checking resource usage if you get this code working.

Amit Shachaf
Message 2 of 5

Thanks Amit. That was my suspicion too, even though I'm very new to LabVIEW. According to PixeLINK support, this functionality is intended just for basic testing, and they directed me to the forum for a possible solution.

 

One possibility I'm considering is to add a wait time between frames (grabbing a frame immediately after the previous one isn't a requirement for our application), but I'm not sure how much that will help.

 

So the output of GetNextFrame is a 1D array, which will take a lot of resources to convert to an image and display in a control? I just want to confirm that I'm understanding it correctly.

 

Yes, actually the image is 15 MP and very large (about 44 MB) when I save it from the SDK directly, so I assume it's a similar size in LabVIEW too.

Message 3 of 5

In LabVIEW, an image transferred via an image control is passed as a pointer.

An image transferred as an array is passed by value.

LabVIEW will automatically allocate an additional image buffer when passing by value.

Performing operations on images as pointer operations is far more efficient.

You have a very large image at 15 FPS. That is a lot of resources.
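A quick back-of-envelope (Python, illustrative) of what that bandwidth looks like, assuming the ~44 MB frames mentioned earlier in the thread correspond to roughly 15 million pixels at 3 bytes per pixel:

```python
# Rough estimate of the data rate when every frame is copied by value.
# Numbers are illustrative: ~15e6 pixels at 3 bytes/pixel (RGB24) matches
# the ~44 MB per-frame file size mentioned above.
pixels = 15_000_000
bytes_per_pixel = 3
fps = 15

frame_bytes = pixels * bytes_per_pixel           # 45,000,000 B ~ 45 MB/frame
rate_mb_per_s = frame_bytes * fps / 1_000_000    # 675.0 MB/s to copy
print(frame_bytes, rate_mb_per_s)
```

That is why an extra by-value copy per frame (array conversion) hurts so much compared with passing a reference.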

Amit Shachaf
Message 4 of 5

OK, got it, thanks. Unfortunately, I don't have any clue about images as arrays/values versus pointers, and I'll need to read about these before I can do anything more with the application. PixeLINK GetNextFrame works both with a pointer and with an array, but as I mentioned previously, (i) they said their functionality is just for basic testing and I shouldn't rely heavily on it, and (ii) I'm very new to LabVIEW and still learning even the basics in the short timespan I have for our project. In the meantime, if you or other members have any helpful material on this issue for me to read and play with, I'd be much obliged; it would save me a lot of time.

Yes, you're right, it's a huge size, and we're considering options to reduce the size of the image. Some of them are: (i) compress and/or save as grayscale only, (ii) save only a specific ROI instead of the whole image, (iii) increase the time interval between saved images.
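For a sense of scale, here is a small Python sketch of how much options (i) and (ii) buy you. The frame dimensions are made-up (5000 x 3000 = 15 MP, chosen only to match the megapixel count mentioned above), not the camera's actual resolution:

```python
def frame_size(width, height, bytes_per_pixel):
    """Uncompressed frame size in bytes."""
    return width * height * bytes_per_pixel

full = frame_size(5000, 3000, 3)   # ~45 MB colour frame
gray = frame_size(5000, 3000, 1)   # grayscale only: 1/3 the size
roi  = frame_size(1000, 1000, 1)   # 1 MP grayscale ROI: ~1 MB
print(full, gray, roi)
```

Cropping to an ROI before any conversion gives by far the biggest saving, and compression (option i) would reduce the saved file further on top of these raw sizes.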

Message 5 of 5