I am losing GigE frames/buffers in a system that synchronously acquires analogue readings and images over long periods.
Computer: Nuvo 1300af 620 (quad-core i7, 2.66 GHz processor, 5 x GigE PoE ports, 3 GB RAM, 256 GB SSD), Windows XP Embedded, NI Vision Acquisition August 2012, NI-DAQmx, LabVIEW 2009 Professional.
Camera: Basler Runner Linescan RL2048-10gm
Data Acquisition: cDAQ chassis with an NI 9205 32-channel analogue input module and an NI 9401 digital I/O module.
The camera and the data acquisition modules use separate GigE ports that have different group addresses. All other GigE ports are disabled.
The LabVIEW program has two while loops that run simultaneously. The linescan camera runs at 3250 lines per second with a frame size of 2048 x 384 (8.46 frames per second). An NI 9205 cDAQ data acquisition module synchronously acquires 18 channels of analogue readings at 3250 samples/second.
While loop 1: contains a Grab VI and an Enqueue Element VI. The Grab VI waits for the next buffer, then enqueues it for use in while loop 2. The loop time is set by the wait for the next buffer (118 ms). This loop contains no other VIs.
While loop 2: contains a Dequeue Element VI that waits for the next element from the queue, setting the loop time. This loop also contains a DAQmx Read VI that reads all the analogue readings available on each iteration. Images and analogue readings are processed, displayed, and immediately written to the solid-state disk in two files (one analogue data file and one of several image files). Writing to the image files is organised so that no file exceeds 1 GB in size: when one image file is full, writing switches to the next file. All files are created and opened at program start-up.
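(For readers unfamiliar with this pattern: the two loops above are LabVIEW's standard producer/consumer architecture. Since the actual code is graphical LabVIEW, here is a rough, hedged Python analogy of the structure only; `grab_frame`, the frame payloads, and the frame count are all placeholders, not the real IMAQdx or DAQmx calls.)

```python
import queue
import threading

# Rough Python analogy of the two LabVIEW while loops.
# Names and values here are illustrative placeholders.

NUM_FRAMES = 20
frame_queue = queue.Queue()            # stands in for the LabVIEW queue

def producer():
    """While loop 1: wait for each new buffer, enqueue it immediately."""
    for buffer_number in range(NUM_FRAMES):
        frame = ("frame", buffer_number)   # placeholder for a 2048x384 U8 array
        frame_queue.put(frame)             # Enqueue Element
    frame_queue.put(None)                  # sentinel: acquisition finished

def consumer(processed):
    """While loop 2: dequeue (blocks, setting the loop time), then process."""
    while True:
        frame = frame_queue.get()          # Dequeue Element
        if frame is None:
            break
        processed.append(frame[1])         # stand-in for processing + file write

processed = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(processed,))
t1.start(); t2.start()
t1.join(); t2.join()
print(processed)   # every frame arrives in order: [0, 1, ..., 19]
```

The point of the pattern is that the queue decouples the acquisition loop from the (slower, more variable) processing and disk-writing loop, so brief stalls in loop 2 should not block loop 1.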
When running, at random intervals, a group of image buffers is lost.
Monitoring the Grab VI in while loop 1 shows that, when running correctly, the buffer number increments by 1 each time it reads a buffer. When buffers are lost, the buffer number jumps by anywhere from 2 to over 10. No errors are shown. There are no changes in the processing in while loop 2 at the time the buffers are lost. Increasing the number of buffers (up to 100!) does not help.
Monitoring the time that the Dequeue VI in while loop 2 waits for the next element from the queue shows that this wait time averages about 60% of the loop time, with a minimum of 40%, except when buffers are lost, when the wait time momentarily drops to zero.
Windows performance monitors show that CPU usage is less than 20%, and network usage is less than 1%
How are you enqueueing this data? Are you passing the IMAQ session reference, or are you converting the data to an array first? Are you writing this data to a TDMS file? If not, how are you writing it to file? It is possible that your file I/O is not fast enough to keep up. Can you post your code so I can take a look and see if anything jumps out at me?
Thanks for your response.
The image is converted to a U8 2D array and saved using the Write to Binary File VI. To reduce overhead, the files are created at start-up, new frames are appended, and the files are not closed until the program ends. The disk is an SSD, to prevent the write-delay problems encountered previously when the HDD became badly fragmented.
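(A hedged sketch of the open-at-start-up, append-until-full, roll-to-next-file scheme described in this thread. The file names, the 1 GB limit, and the frame size are shrunk to illustrative values so the sketch runs instantly; the real program uses LabVIEW's Write to Binary File VI, not Python.)

```python
import os
import tempfile

MAX_FILE_SIZE = 1_000          # stands in for the 1 GB limit
FRAME_SIZE = 300               # stands in for 2048 * 384 bytes per frame

tmpdir = tempfile.mkdtemp()
paths = [os.path.join(tmpdir, f"images_{i}.bin") for i in range(3)]
files = [open(p, "wb") for p in paths]   # all files opened at start-up
current = 0

def write_frame(frame_bytes):
    """Append one frame, switching to the next file when the current is full."""
    global current
    if files[current].tell() + len(frame_bytes) > MAX_FILE_SIZE:
        current += 1                     # switch files; nothing is closed yet
    files[current].write(frame_bytes)

for _ in range(8):                       # 8 frames of 300 bytes spill across files
    write_frame(b"\x00" * FRAME_SIZE)

for f in files:                          # in the real program, only at shutdown
    f.close()
sizes = [os.path.getsize(p) for p in paths]
print(sizes)                             # [900, 900, 600]
```

Keeping every file open for the whole run avoids per-frame open/close overhead, which is presumably why the program does it; the trade-off is that an unclean shutdown can lose buffered data.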
The program is a new version of a program that we have been using for several years on a Core 2 Duo 2 GHz processor, with a Camera Link camera and IMAQ, with no problems except for fragmented disks.
The problem we have has the same symptoms as described in a discussion forum/knowledge base article that I found (but can't find again!), where GigE frames are lost without any reported error. This was apparently due to a "heartbeat" packet timeout, and was fixed in later versions of the Vision Acquisition software, where the timeout was 2 seconds and was subsequently increased to 5 seconds. I downloaded a hotfix, but I am scared to install it, as it modifies the registry to increase the timeout for earlier versions of Vision Acquisition. This hotfix is 384042_ENU_i386_zip.exe.
The program is large and complex, so I will need to simplify it before posting it to you.
How long are the random intervals before you start losing frames? Are we talking about a range of 1 min, 10 min, 1 hr, etc.? Do you get proper acquisition in MAX, or do you lose frames there too? Try lowering the Peak Bandwidth Desired of your camera in MAX and see whether you get more loss, less loss, or none at all (MAX >> (Your Camera) >> Camera Attributes tab >> Acquisition Attributes >> Advanced Ethernet >> Bandwidth Control). Does this look like the forum post you saw? Where did you find the hotfix you have?
I found a solution that fixes my problem.
I thought that the Grab VI, when set to acquire the Next buffer, would return all acquired buffers in sequence unless a buffer was overwritten. Instead, if the buffers are not grabbed fast enough, it does not return the next buffer but the last one acquired, and reports no error. I had set the number of buffers to 25 or more, so that overwriting buffers should not have been a possibility.
I changed the program so that it maintains its own buffer-number count (incrementing the buffer number on each Grab) and specifies this buffer number to the Grab VI. This works well, even on occasions when the acquisition loop is momentarily delayed by up to a second or more by (I suspect) Windows writing disk-buffer contents to the disk, or doing some other housekeeping.
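(To illustrate the difference the fix makes, here is a hedged Python toy model, not the IMAQdx API: the driver holds a ring of buffers, "grab next" after a stall hands back only the latest buffer, while requesting each buffer by an explicitly maintained number recovers every buffer that is still in the ring. `FakeRing` and all its methods are invented for this sketch.)

```python
RING_SIZE = 25   # stands in for the 25+ buffers configured in the program

class FakeRing:
    """Toy driver-side ring buffer: buffers 0..latest have been acquired."""
    def __init__(self):
        self.latest = -1

    def acquire(self, n=1):
        self.latest += n                   # camera keeps filling buffers

    def grab_next(self):
        # Models the observed "Next buffer" behaviour: after a stall it
        # returns the most recently acquired buffer, with no error.
        return self.latest

    def grab_numbered(self, number):
        # Models grabbing a specific buffer number: succeeds as long as
        # that buffer has not yet been overwritten in the ring.
        assert self.latest - number < RING_SIZE, "buffer overwritten"
        return number

ring = FakeRing()
ring.acquire(5)                # 5 buffers arrive while the loop is stalled

# Old approach: "next" jumps straight to buffer 4, so buffers 0-3 are lost.
print(ring.grab_next())        # 4

# Fixed approach: keep our own counter and request each buffer explicitly.
expected = 0
grabbed = []
while expected <= ring.latest:
    grabbed.append(ring.grab_numbered(expected))
    expected += 1              # increment our own buffer-number count
print(grabbed)                 # [0, 1, 2, 3, 4]
```

The design point is that with 25+ buffers in the ring, a one-second stall still leaves the missed buffers available, so requesting them by number recovers them instead of silently skipping ahead.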