


Network stream stuck at last image and writer timed out

Hi Jenda,

 

I just read something (I'm sure you have looked into it as well), but there is an IMAQdx Configure Acquisition VI with inputs that specify the "number of buffers" and the acquisition "mode". Did you try configuring your acquisition? I didn't see this VI on your block diagram. I'm not sure this is how it's supposed to be done, but based on how it generally works, you may want to place this VI right after the Open Camera VI.


Open Camera - Configure Acquisition - (Acquire) - Close Camera
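To make the call order concrete, here's a minimal sketch in plain Python (not LabVIEW, which is graphical). The `Camera` class is hypothetical and only exists to enforce the Open - Configure - Acquire - Close ordering; it doesn't model any real IMAQdx API:

```python
# Hypothetical camera wrapper illustrating the Open -> Configure ->
# Acquire -> Close sequence. Configuration must happen after opening
# and before any acquisition, mirroring where the Configure
# Acquisition VI would sit on the block diagram.
class Camera:
    def __init__(self):
        self.state = "closed"
        self.num_buffers = None
        self.mode = None

    def open(self):
        self.state = "open"

    def configure_acquisition(self, num_buffers, mode):
        # Must be called after open() and before snap().
        assert self.state == "open", "configure after Open, before acquiring"
        self.num_buffers = num_buffers
        self.mode = mode

    def snap(self):
        assert self.num_buffers is not None, "configure the acquisition first"
        return "frame"

    def close(self):
        self.state = "closed"

cam = Camera()
cam.open()
cam.configure_acquisition(num_buffers=1, mode="one-shot")
frame = cam.snap()
cam.close()
```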

 

EDIT: I would leave everything as is so far and just add this Configure Acquisition VI.

 

Message 21 of 29

Hi,

Yes, I tried this one. It had zero impact on the performance, so I didn't include it afterwards. The configuration is probably set to the defaults when using Snap.vi, i.e. one-shot mode and a buffer size of 1.

Message 22 of 29

OK soooo I'm just dumb...

You don't have to configure anything when using Snap because the buffer size is set to 1 by default. What you do have to do is also set the buffer size on the Create Network Stream Writer/Reader Endpoint to 1. Then, after you put an image into the buffer and send it with the writer, the buffer is emptied and you can Snap a new image. With a buffer size of around 30k, the stream keeps putting new images in there, and that memory allocation is what caused the memory leak. This is covered in the article, starting at "Endpoint Buffers". I've read it many times, but my brain chose to ignore this important part. It even works when you send the image as a data type instead of an array, at the cost of a few more percent of CPU utilization and higher network usage. I'm attaching VIs of both variants.
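The behavior described above can be sketched in plain Python (not LabVIEW): a bounded buffer of size 1 blocks the writer until the reader has taken the previous frame, so memory stays flat instead of growing while a ~30k-element buffer fills:

```python
# Sketch of single-slot backpressure: a queue with maxsize=1 stands in
# for a network stream endpoint with buffer size 1. put() blocks until
# the consumer empties the slot, so at most one frame is ever buffered.
import queue
import threading

frames = queue.Queue(maxsize=1)  # writer endpoint buffer of 1

def producer(n):
    for i in range(n):
        frames.put(i)   # blocks until the previous frame was consumed
    frames.put(None)    # sentinel: no more frames

def consumer(out):
    while True:
        frame = frames.get()
        if frame is None:
            break
        out.append(frame)

received = []
t = threading.Thread(target=producer, args=(5,))
t.start()
consumer(received)
t.join()
# All frames arrive, in order, with at most one held in memory at a time.
```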

 

The only thing that still bugs me is that the sending frequency feels off. The resolution is set to 640x480 at 30 fps, and with a 33 ms loop time I should be getting/sending 30 fps, but the image is still choppy and it looks like it is sending 5 fps at most, as seen in this GIF. And I'm probably right: I added a counter on top that counts how fast it acquires images. The error at the bottom is from a previous try where I forgot to switch to the right webcam.

Is there something I can do about that, or did I miss something again?
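The counter idea above generalizes: count completed acquisitions over a fixed interval and divide by the interval. A minimal Python sketch (the per-call `time.sleep` is a stand-in for however long each Snap actually blocks; the camera and its timing are assumptions, not measured values):

```python
# Measure achieved frame rate by counting how many acquire() calls
# complete within a fixed window. If each Snap blocks longer than the
# loop period (e.g. because the camera re-arms per shot), the measured
# rate falls below the nominal loop rate.
import time

def measure_fps(acquire, duration=0.2):
    """Count acquire() calls completed in `duration` seconds."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        acquire()
        count += 1
    return count / duration

# Simulated acquisition that costs ~20 ms per call:
fps = measure_fps(lambda: time.sleep(0.02))
```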

 

 

 

Message 23 of 29

Glad it worked out for you! I was just setting up my personal computer to test image acquisition. I might still do it regardless and will post my findings.


@Jenda02 wrote:

OK soooo I'm just dumb...

You don't have to configure anything when using Snap because the buffer size is set to 1 by default. What you do have to do is also set the buffer size on the Create Network Stream Writer/Reader Endpoint to 1. [...]

The only thing that still bugs me is that the sending frequency feels off. [...] Is there something I can do about that, or did I miss something again?

 

No clue, yet.

 


 

Message 24 of 29

Hello, Jenda02.

 

     From your last response, I think I know your problem.  But you need to "help me to help you".  You should be developing your code in the context of a LabVIEW Project.  Generally, all the files you create, both Host and Remote, in a Real-Time Project are in a common Folder.  Please compress this Folder (right-click the Folder and choose "Compress"), which makes a .ZIP file that you can attach to your reply.  This will allow me to see exactly what you are doing, and if I am right, I can (probably) solve your problem.

 

Bob Schor

Message 25 of 29

Hello, 

I've attached compressed folders of both projects (image data type and array).

J.

Message 26 of 29

Hello, 

Any luck in finding out why it is behaving like this and not acquiring more frames?

J. 

Message 27 of 29

Now I'm confused.  I thought you were trying to acquire video images, not stills, and were having a problem with frame rate and dropping frames.  I was really hoping that I could pin this down to a LabVIEW Vision misconception (LabVIEW Vision, trust me, is not at all "obvious") having to do with Continuous Acquisition and the concept of "Buffers", but your code shows only Snaps.

 

I was really trying to avoid having to write code for my own WebCam and try to transfer an Image, myself.  I've got a few other tasks that need to take precedence, but will try to write a "Transfer an Image from myRIO to Host" routine sometime soon.  Please be patient ...

 

Has no one else on the Forums attempted to connect a WebCam to a myRIO?

 

Bob Schor

Message 28 of 29

So I have no experience with Vision (sorry...) but I have spent some time fiddling with Network Streams (mixed opinion).

 

This article might be helpful: Lossless Communication with Network Streams: Components, Architecture, and Performance 

In particular, in the performance section it discusses memory allocation for non-scalar datatypes.

 

Given that you know the resolution of your images (and if this were dynamic, you could conceivably send that information separately, either in another stream or via some sort of manually applied header), I'd suggest using Reshape Array on the RT side to get a 1D array of integers, and using the 'pre-allocate' method with an initialized array of the appropriate size when creating the stream writer (allocate before the connecting loop and pass it in; don't initialize the array inside the loop).
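The reshape round-trip looks like this in plain Python (not LabVIEW; `WIDTH`/`HEIGHT` are tiny stand-ins for the 640x480 resolution, which both sides are assumed to know in advance):

```python
# Flatten a known-size 2D image to a fixed-length 1D array before
# writing (like Reshape Array on the RT side), then rebuild it on the
# reader side. Because the 1D length is constant, the stream element
# size never changes and the writer's buffer can be allocated once.
WIDTH, HEIGHT = 4, 3  # stand-in for 640, 480

def flatten(image):
    """2D row-major image -> fixed-length 1D list."""
    assert len(image) == HEIGHT and all(len(row) == WIDTH for row in image)
    return [px for row in image for px in row]

def unflatten(flat):
    """1D list -> 2D image, using the resolution known a priori."""
    assert len(flat) == WIDTH * HEIGHT
    return [flat[i * WIDTH:(i + 1) * WIDTH] for i in range(HEIGHT)]

img = [[r * WIDTH + c for c in range(WIDTH)] for r in range(HEIGHT)]
flat = flatten(img)
assert unflatten(flat) == img  # round-trip is lossless
```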

On the reader side, the setting is less critical, because you are probably far less resource-constrained on the desktop. Increasing the "read buffer size" basically allows the myRIO to push the data to your desktop and have it sit in the reader's buffer until the PC handles it, removing it from the more constrained writer's buffer.

 

I'd also suggest increasing the buffer allocation beyond 1 (but be aware that this will multiply the memory required), so maybe 2, 3, or 4 is fine. This allows (some) delays in your network stack without preventing smooth streaming out of the loop. (Separately, you have timeout = -1 for that loop, so it should only run once, I think... but the loop might still help with errors other than timeout; I don't recall... 😕)

 

Since you have an error shift register in your top RT loop but no way of clearing the error, if an error does occur then your acquisition will probably silently stop and you'll just be sending empty elements. Either have the error stop the loop, or use Clear Error inside the loop (and presumably some indication, like a counter or an array, to track whether and how many errors have occurred).
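The clear-and-count pattern can be sketched in plain Python (not LabVIEW; the flaky camera here is a made-up stand-in, and `RuntimeError` plays the role of a wired error cluster):

```python
# Instead of latching the first error forever (the shift-register
# behavior), clear the error each iteration and count occurrences, so
# the loop keeps acquiring and failures remain visible.
def run_loop(acquire, iterations):
    error_count = 0
    frames = []
    for _ in range(iterations):
        try:
            frames.append(acquire())
        except RuntimeError:   # the "Clear Error" + counter pattern
            error_count += 1
    return frames, error_count

# Simulated acquisition that fails on its second call:
calls = iter([1, RuntimeError("camera busy"), 3])

def flaky_acquire():
    value = next(calls)
    if isinstance(value, Exception):
        raise value
    return value

frames, errors = run_loop(flaky_acquire, 3)
# frames == [1, 3], errors == 1: acquisition continued past the error.
```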


Message 29 of 29