09-21-2007 03:41 PM
09-22-2007 03:34 AM
09-24-2007 09:12 AM
09-24-2007 09:47 AM - edited 09-24-2007 09:47 AM
Hi there Chris,
Really sorry about this.
The IMAQ driver is used for frame grabbers, including full-configuration Camera Link acquisition (up to 5,400 Mb/s), which will commonly use a buffer of this magnitude. NI therefore sought to overcome this memory limitation inside the IMAQ driver.
The IMAQdx driver, however, was only intended for use with FireWire (400 or 800 Mb/s) and GigE (1,000 Mb/s) cameras, so we never implemented the workaround for this memory problem. Since the driver does not implement that workaround and is 32-bit only, even moving to Vista right now will not change this. If and when we release a fully compatible 64-bit driver, though, I would fully expect this limitation to be looked at more closely.
I'm afraid there is no way to get around this limitation with IMAQdx on Vista or XP at the moment.
Regards,
AdamB
09-24-2007 10:02 AM
11-10-2009 04:09 AM
Dear Chris,
I am having the same problem.
Windows is certainly not responsible, because you can double the number of acquired images just by changing the acquisition parameters inside LabVIEW.
When calling IMAQdx Configure Acquisition you can select 'Continuous' instead of 'One Shot' and define a limited number of buffers (for example 10) that serve only as breathing room. Even though this procedure is not the one recommended in the examples and manual for 'sequence' acquisition, it works well (as far as I have tried).
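The pattern above can be sketched as a toy Python model (the names `RingCamera` and `grab_sequence` are illustrative only, not real IMAQdx calls — in LabVIEW this would be the Configure Acquisition / Get Image VIs):

```python
# Toy model of a continuous acquisition into a small driver ring.
class RingCamera:
    """Stand-in for a driver acquiring into a fixed ring of buffers."""
    def __init__(self, ring_size):
        self.ring = [None] * ring_size
        self.next_frame = 0

    def capture(self):
        slot = self.next_frame % len(self.ring)
        self.ring[slot] = ("frame", self.next_frame)  # overwrite the oldest slot
        self.next_frame += 1
        return slot

def grab_sequence(total, ring_size=10):
    """Acquire `total` frames while the driver only ever holds `ring_size` buffers."""
    cam = RingCamera(ring_size)
    frames = []
    for _ in range(total):
        slot = cam.capture()
        frames.append(cam.ring[slot])  # copy out before the slot is recycled
    return frames
```

As long as each image is copied out before the driver wraps around, the whole sequence is captured while only `ring_size` driver buffers exist at any moment.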
The problem is more general.
Have a look at the attached files. Run 'Sequence_tempi.vi' as is, then remove 'sequence_modif_Ema.vi' and put 'IMAQdx Sequence.vi' in its place (in both cases set, for example, 200 images to be acquired).
If you watch the Task Manager while running the VIs above, you can see that the RAM is allocated DURING the acquisition when using 'sequence_modif_Ema.vi', while it is allocated TWICE when using 'IMAQdx Sequence.vi'.
So the problem is: allocating the RAM during the acquisition is not formally correct, but allocating it twice is even worse.
Do you have any idea?
Bye,
Ema
11-10-2009 10:24 AM
Hi all,
I thought I'd try to add something to this thread since there is a bit of misinformation in some of the posts and perhaps I can lend some clarification to the behavior you are seeing.
First, with regard to AdamB's posts: there are no fundamental memory limitations present in IMAQdx compared to IMAQ with respect to how many buffers you can allocate. The only "workaround" present is that Windows imposes some limits on the maximum size of a single DMA'able segment, which both the IMAQ and IMAQdx drivers work around transparently to the end user. This would previously limit you to <64 MB per image (but had no bearing on how many images you could allocate); as I mentioned, current versions of IMAQ and IMAQdx work around this seamlessly to allow images larger than that.
What you are likely running into with memory limitations is the 32-bit address space of a 32-bit application. On a normal 32-bit Windows, this limits each process to a 2 GB address space, and in a typical application that space is fairly fragmented, with various DLLs loading in the middle of it and scattered allocations throughout. Especially once you start dealing with larger image sizes, it is typical for allocations of a virtually contiguous block to start failing well before the total space is filled, simply because there are no free blocks of a large enough size. I've discussed this before, but there are other things you can try, such as booting with the /3GB switch or running on a 64-bit OS, which will give the same application a 3 or 4 GB memory space automatically. If you were to switch to a 64-bit version of LabVIEW as well, that address space limit becomes virtually infinite.
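Fragmentation is easy to see with a toy model (Python, sizes in MB, figures purely illustrative): a few scattered allocations leave plenty of memory free in total, yet no single hole is large enough for one big contiguous image buffer.

```python
# Toy model of a fragmented 2 GB address space (all sizes in MB).
SPACE_MB = 2 * 1024
used = [(500, 520), (1000, 1010), (1500, 1530)]  # (start, end) of occupied ranges

# Compute the free holes between the occupied ranges.
free = []
prev = 0
for start, end in sorted(used):
    if start > prev:
        free.append((prev, start))
    prev = end
if prev < SPACE_MB:
    free.append((prev, SPACE_MB))

total_free_mb = sum(e - s for s, e in free)    # ~1.9 GB free in total...
largest_hole_mb = max(e - s for s, e in free)  # ...but the biggest hole is only ~518 MB

request_mb = 600                               # one large contiguous image buffer
allocation_succeeds = largest_hole_mb >= request_mb  # fails despite ample free memory
```

Only ~60 MB is actually in use here, yet a 600 MB contiguous request cannot be satisfied — which is exactly why large images fail to allocate long before the 2 GB total is exhausted.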
Now, one big difference between the IMAQ and IMAQdx APIs is that IMAQ has both Extract and Grab primitives available. With Extract, the buffers allocated by the user are directly DMA'd to by the hardware and when the user wants to access them, they "extract" them from the hardware and directly access them. With the Grab API, the user requests an image and the image is automatically extracted, copied into a new buffer the user provides, and then released back to the hardware. This is often used in situations with IMAQ where you want to ensure that the hardware does not lose images if your processing takes too long.
In IMAQdx, however, there is currently only a Grab API available, not an Extract. There are several reasons for this. The camera interfaces that IMAQdx supports (1394, GigE Vision, DirectShow) typically do not support direct acquisition into the user's end buffers. One of the most common reasons is that the data format the camera transfers is not the format used for processing: the image must often be decoded first (unpacking, endianness conversion, shifting, Bayer decode, YUV decode, etc.). Also, if the image cannot be DMA'd directly into the user's buffer due to the restrictions of commodity hardware, it is wasteful to burn processing power "emulating" a direct-to-user-buffer acquisition when the user may never access a particular buffer. With the Grab API, you only pay the decode/copy CPU penalty for images that you actually access.
Ema, the "double" memory you are seeing is most likely a result of how the high-level sequence VIs work together with the underlying Grab mechanism in IMAQdx. When you configure the number of buffers for the acquisition in the configure step, that number controls how many raw buffers the driver allocates internally (for non-decoded image data). However, the high-level sequence also allocates the same number of end-user image buffers, which it decodes into after the acquisition finishes. Depending on the native image format and the decoded image format, the two can be the same size, in which case you see a "doubling" of image data because the raw and decoded images each occupy the same amount of space.
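The arithmetic behind that doubling is simple; a rough model (the function name and example sizes are illustrative assumptions, not measured values):

```python
# Rough footprint model: N raw driver buffers plus N decoded user images.
# When both formats use the same bytes per pixel, the total is exactly double.
MIB = 2 ** 20

def sequence_footprint_mib(n_images, width, height, raw_bpp, decoded_bpp):
    """Return (raw, decoded) image-data footprints in MiB."""
    raw = n_images * width * height * raw_bpp
    decoded = n_images * width * height * decoded_bpp
    return raw / MIB, decoded / MIB

# e.g. 200 frames of 1024x1024 8-bit mono, decoded to an 8-bit image:
raw_mib, decoded_mib = sequence_footprint_mib(200, 1024, 1024, 1, 1)
# 200 MiB of raw buffers + 200 MiB of decoded images = 400 MiB total
```

With a format that decodes to more bytes per pixel (e.g. Bayer raw to RGB), the decoded side would be larger still, so "double" is actually the best case here.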
As for how you would structure your application if you are running into memory limitations with the Sequence, here are a few ideas:
- Keep the images in IMAQdx's buffer ring and grab/decode them on demand into a single image buffer as you need to access them. If you take a look at the Low-Level Sequence example provided, it should give you some idea of how to do this. That example still allocates an entire array of images, but you can see how it decodes each one in a loop. This limits the number of user-allocated images at the expense of extra time to access each image, particularly if you need to access an image more than once.
- Run a continuous acquisition with a comfortable number of image buffers and then grab/decode them into an array of images the size of your total acquisition. This limits the number of driver-allocated images at the expense of having to ensure that your CPU can keep up with acquiring and decoding the images during the acquisition.
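The first idea can be sketched as a toy Python model (`Ring`, `decode`, and `process_on_demand` are hypothetical names standing in for the low-level IMAQdx VIs, not real calls):

```python
# Toy model: leave raw frames in the driver ring and decode each one
# on demand into a single reusable user buffer.
class Ring:
    def __init__(self, raw_frames):
        self.raw = raw_frames  # stands in for the driver-owned raw buffers

    def decode(self, i, out):
        out.clear()
        out.extend(self.raw[i])  # "decode" frame i into the caller's buffer
        return out

def process_on_demand(ring, n, process):
    scratch = []   # the single user-allocated image, reused for every frame
    results = []
    for i in range(n):
        ring.decode(i, scratch)
        results.append(process(scratch))
    return results
```

Only one decoded image exists at a time, so user-side memory stays constant no matter how long the sequence is — the trade-off, as noted above, is that every access to a frame pays the decode cost again.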
Hope this helps,
Eric