LabVIEW


Mikrotron camera & Measurement and Automation Explorer: reduce fps and control exposure rate

I am having problems setting the frame rate and the exposure time.

 

1) In continuous acquisition mode, the frame rate is always set to the maximum value for a given image size. I want to bring the fps down so that I can have a longer exposure time, but I cannot find a way to do it.

 

2) In SingleFrame acquisition mode, the raw exposure time shouldn't be constrained any more. However, there is a maximum value of 1023. I wonder what unit this is, where it comes from, and whether it can be altered.

 

 

 

Message 1 of 12

Hi dj327,

 

Can I ask why you want to use MAX to control the exposure rate? MAX is a tool for configuring and testing hardware, and may not have all the tools required for your hardware.  Are you using any form of LabVIEW or other NI software to control the cameras?

 

I can see that you are using a Mikrotron camera. What model of camera are you using?

 

Thanks, 

DanC12

Message 2 of 12

Hi DanC12, 

 

Thanks for replying. I have just got started with NI MAX and LabVIEW programming, and I was under the impression that if a control option does not exist in NI MAX, LabVIEW will not help either. Let me know if this is not correct.

 

I am using a Mikrotron 1324.

 

And I used the wrong word there: it should be exposure time, not exposure rate.

 

Thanks, 

Di

Message 3 of 12

It depends on the product. With a lot of NI's own hardware you have full control, but with non-NI hardware this can vary quite a lot. The majority of non-NI hardware doesn't show up in MAX at all, and those that do may not offer full control options, just some simple configuration tools. I have not used that camera before, so it's hard to say what will show up in MAX, but by the sounds of it the functionality you are looking for isn't available there. If you know how to set up the acquisition in LabVIEW, however, you should be able to alter the fps and exposure time. Contacting the Mikrotron supplier/manufacturer would also be useful.

 

If you are unsure or not fully confident about setting up the camera in LabVIEW (I know you said you have just got started), there are plenty of examples on ni.com/community which will help. Search for things such as vision acquisition or cameras.

 

Thanks, 

DanC12

Message 4 of 12

I have emailed Mikrotron. They said that I need to send serial commands to the camera, and that to change the frame rate and exposure time I need to write to registers 2 and 6. I am unfamiliar with serial commands; I wonder which function in LabVIEW allows you to write to the camera?
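Whatever the transport, the core of a register write is packing the register number and value into the byte layout the camera expects before sending it down the serial link. The frame format below (one opcode byte, one register byte, a 16-bit value) is entirely hypothetical -- the real layout must come from Mikrotron's serial protocol documentation -- but it shows the shape of the code, sketched here in Python:

```python
import struct

# Hypothetical command framing: the actual byte layout must come from
# Mikrotron's serial protocol documentation. This only illustrates the
# idea of packing "write <value> to <register>" into bytes before
# sending them over the serial link.

WRITE_OPCODE = 0x57  # hypothetical "write" opcode

def build_register_write(register: int, value: int) -> bytes:
    """Pack a register-write command: 1 opcode byte, 1 register byte,
    2 value bytes (big-endian)."""
    if not 0 <= register <= 0xFF:
        raise ValueError("register out of range")
    if not 0 <= value <= 0xFFFF:
        raise ValueError("value out of range")
    return struct.pack(">BBH", WRITE_OPCODE, register, value)

# Per Mikrotron's reply: register 2 controls the frame rate and
# register 6 the exposure time (raw units).
frame_rate_cmd = build_register_write(2, 50)    # e.g. 50 fps
exposure_cmd   = build_register_write(6, 1000)  # raw exposure units
print(frame_rate_cmd.hex())  # -> 57020032
```

In LabVIEW the framing would typically be done with the byte-array functions and sent with a serial-write VI (e.g. VISA Write for a plain COM port, or whatever serial interface your camera driver exposes).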

 

Thanks, 

Di

Message 5 of 12

Hi Danc12, 

 

I have made some progress in understanding GigE Vision cameras and LabVIEW, and now I can define my question more clearly.

I realized that my real problem is not reducing the fps so I can increase the exposure time and gain more light. If the exposure time is set to its maximum (the time interval between frames), then an image taken at 1-second intervals should look similar to two images taken at 0.5-second intervals and overlaid. So it seems a silly idea to reduce the fps just to make the image brighter.
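The ceiling being described here can be put in numbers: the exposure time of one frame can never exceed the frame period, 1/fps. (A real camera also subtracts some readout overhead, which is camera-specific and ignored in this sketch.)

```python
# Exposure ceiling per frame: at most the frame period, 1/fps.
# Readout overhead, which real cameras subtract, is ignored here.

def max_exposure_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (25, 50, 100):
    print(f"{fps:>3} fps -> at most {max_exposure_ms(fps):.1f} ms exposure")
```

So halving the fps doubles the available exposure per frame, but, as noted above, that buys nothing you could not get by summing two shorter exposures.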

 

I have tried various example VIs for grabbing and saving AVI videos, and I have run into another problem: keeping the fps of the videos as high as the acquisition rate. As the loop rate of the VI is always slower than the camera's fps, I lose a significant number of buffers. I wonder if there is a solution to this.
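One common cure for this is a producer/consumer split: the acquisition loop does nothing but push frames into a queue, and a separate loop drains the queue and does the slow disk writing, so a slow write no longer stalls the grab. The Python sketch below simulates the idea; frame numbers stand in for image buffers, and with IMAQdx the producer would be the grab loop and the consumer would call the AVI-write VI.

```python
import queue
import threading

# Producer/consumer sketch: the acquisition loop only moves frames into
# a queue; a separate loop does the slow disk writing. Frames here are
# simulated integers standing in for image buffer numbers.

frame_queue = queue.Queue(maxsize=200)  # buffer depth: tune to your RAM
N_FRAMES = 100
saved = []

def producer():
    for i in range(N_FRAMES):
        frame_queue.put(i)          # in LabVIEW: enqueue the buffer number
    frame_queue.put(None)           # sentinel: acquisition finished

def consumer():
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        saved.append(frame)         # in LabVIEW: write the frame to the AVI

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(saved))  # every frame reaches the writer; none are dropped
```

The key design point is that the queue absorbs bursts: as long as its depth covers the longest disk stall, no buffer is overwritten before it is saved.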

 

Thanks, 

Di

Message 6 of 12

Are the options greyed out in MAX?  I'm assuming you can see them.  A lot (most?  all?) of these settings are idiosyncratic to the Camera Manufacturer -- does the manual for the camera suggest that these settings are variables (as opposed to "determined by the size of the image", for example)?

 

BS

Message 7 of 12

Sorry, I didn't realize (when I posted the previous post) that there were other replies, and that most of my comments had been addressed.

 

Now you seem to be asking how to acquire and save AVI data at reasonable frame rates. We have been collecting video from up to 24 Axis cameras, all running at 24 fps. We typically collect 5-10 second videos that capture a specified event occurring every few minutes, so at any one time only a few of the cameras are expected to be writing to disk.

 

We use a Producer/Consumer architecture to handle saving the video.  Each camera is allocated sufficient buffer space (determined, in part, by "experimentation").  We save the camera buffer IDs in a fixed length lossy Queue (Producer), and when an Event occurs, start dumping the Queue (Consumer) as an AVI file to disk.  It's been a while since I looked at this code, but I think that we simultaneously switch the Producer to utilize a second Queue (the first Queue serves as our "Images before the Event", and the new Queue is our "images on or after the Event") and continue Producing and Consuming until we satisfy our "so-many-seconds-after-Event" video criterion.
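The two-queue scheme described above can be sketched in a few lines of Python: a fixed-length lossy buffer (`collections.deque` with `maxlen`) keeps only the most recent pre-event frames, and once the event fires, frames accumulate in an ordinary list until the "so many seconds after the event" criterion is met. All the numbers below are invented for illustration.

```python
from collections import deque

# Two-queue sketch: a lossy fixed-length buffer holds the most recent
# pre-event frames; after the event, frames go into an ordinary list
# until the post-event duration is reached. Numbers are invented.

FPS = 24
PRE_SECONDS, POST_SECONDS = 5, 5

pre_event = deque(maxlen=FPS * PRE_SECONDS)   # lossy: oldest frames fall off
post_event = []
event_seen = False

for frame_id in range(1000):                  # simulated frame stream
    if not event_seen:
        pre_event.append(frame_id)
        if frame_id == 600:                   # simulated event trigger
            event_seen = True
    else:
        post_event.append(frame_id)
        if len(post_event) >= FPS * POST_SECONDS:
            break

video = list(pre_event) + post_event          # frames to dump to the AVI
print(len(video), video[0], video[-1])        # 240 481 720
```

The lossy pre-event buffer is what makes this cheap: no disk activity at all until an event occurs, yet the seconds leading up to it are always available.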

 

We have no trouble (on a 5-year-old quad core PC running Windows 7) keeping up with 24 simultaneous stations all running at 24 fps and all running asynchronously.  Well, to be honest, sometimes one or two stations (in a 2-hour recording session) can error out, but this is not necessarily a video problem -- as often as not, the "weak link" appears to be the serial traffic coming from our "Event detector".

 

Bob Schor

Message 8 of 12

Hi Bob, 

 

Thanks for your reply. For my project, I only need to capture an event for a few seconds, but at a relatively high fps (~50), and I need to know the time interval between frames. I think your code is more complicated because you need to trigger image capture when an event happens. But I think the concept of your code is applicable to my case, if I understand it correctly -- allocate enough buffer space for the images beforehand, so no image is overwritten while the AVI is being written.

 

But the problem is that I have just got started with LabVIEW and cannot yet write my own code. I am currently downloading examples from the NI forums. I have found some examples that address the high-speed acquisition problem, for example "acquiring images from high speed camera and save them to a binary file" (https://decibel.ni.com/content/docs/DOC-20952), but it requires the IMAQdx driver in addition to LabVIEW. I wonder if this is the common case for high-speed acquisition, or whether there is a way around it that produces the same result with LabVIEW only.

 

Thanks, 

Di

Message 9 of 12

Dear Di,

 

If you really want to do (relatively) high-speed video and image capture, you really need the Vision Toolbox and IMAQdx. For a single camera, recording at 50 fps should be possible (though it will depend on such things as the image size, the image color depth, the speed of your PC, etc.). If we can save images from 24 cameras at 30 fps (albeit not all saving at once, but possibly two or three at once for a period of seconds), you should (famous last words!) be able to save from a single camera at 50 fps without losing any frames. You can sometimes tell if frames are missing, either by jumps in the image or, if the camera includes a frame counter or a timestamp in the image, by looking for "gaps".
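The frame-counter check mentioned at the end is easy to sketch: read the counter back from each saved frame and flag any place where it does not advance by exactly one. The counter values below are invented.

```python
# Dropped-frame detection via a frame counter: any place where the
# counter does not advance by exactly one marks lost frames.
# The counter values here are invented for illustration.

def find_gaps(counters):
    """Return (last_counter_before_gap, frames_missing) pairs."""
    gaps = []
    for prev, cur in zip(counters, counters[1:]):
        if cur != prev + 1:
            gaps.append((prev, cur - prev - 1))
    return gaps

recorded = [1, 2, 3, 7, 8, 12]     # counters read back from saved frames
print(find_gaps(recorded))         # -> [(3, 3), (8, 3)]
```

A per-frame timestamp works the same way, except the expected step is the frame period rather than 1.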

 

Bob Schor

Message 10 of 12