11-24-2010 04:30 PM - edited 11-24-2010 04:34 PM
Hi,
Thanks for the clarification - I misunderstood your situation - and thanks for the explanation of the "packet" definition.
For the past month and a half, I have been reading a lot of articles and forum posts about Shared Variables (SV) and have developed my understanding of SVs and of DataSocket (DS) + SV.
And...
Just when I thought I had a handle on things, I'm back to running in circles again...
Yes, today I again started reading this forum because I wanted to reply to your post (but only after enough study), and coincidentally I am also dealing with a Double Waveform now 😄 and I had exactly the same feeling as in your quoted words above.
Until a few months back, I used only DataSocket for data communication. After running into issues, I started exploring Shared Variables to remove the dependency on DataSocket. In my experiments I could buffer Shared Variables for String and other simpler types while using bound shared variables (defined in a project library and used as nodes), but not while I accessed them via a "PSP" URL in the DataSocket functions. The reason, I assume, is that reading an SV via PSP through DS effectively bypasses the buffering functionality of the Shared Variable Engine and the DataSocket Server Manager. If we use SV nodes (dragged and dropped from the project explorer), the node (publisher or subscriber) honours the parameters defined in the variable properties. If we use DS nodes, the DS functions honour the DataSocket Server Manager parameters. But the PSP URL just refers to the shared variable as a single data item and does not use its buffering properties. At least, this is what I assumed.
And I thought that, by using the DataSocket property nodes, I could define buffers, as an image in the article Buffered Network-Published Shared Variables: Components and Architecture hinted. But surprisingly, the buffering didn't happen. Your post rang bells in my mind when I realized that the datatype I was dealing with is Double Waveform, and although I defined BufferMaxPackets explicitly in the program, the packets were not being buffered. In fact, I used another property, "Buffer Utilization (Packets)", on the reader side, and surprisingly it was not using even 1 packet. And now, today (after reading your posts again), I understand that you were running into the same issue. Also, today I opened your programs in LV2010, so I could confirm what you meant. Until now I thought you had only set the buffering parameters in the DS Server Manager.
The above article is good for understanding the buffering system, and the article Using Shared Variables has good information too, although I suppose you may have already read both.
I am using a double waveform with a single channel, for sound data. It's actually an audio/video chat-like application. The video part works smoothly on its own (I flatten the image and send the frames via DataSocket to shared variables), but when I try to send sound data as well, the problem appears. I have figured out a good combination of No. of Samples (25000) and Sample Rate (15000), but the sound data makes the image frame rate very slow, and experimenting with buffered shared variables didn't quite solve the issue. I was surprised that a smooth network stream of images required only 1 packet in the buffer (i.e., no buffering). The sound didn't use the buffering either (although it was defined), and that combination is not good.
I could not fully understand your program. Have you found a "good" (acceptable) workaround for your buffering problem? Or perhaps it's not a buffering issue at all, since I see no buffered packets being utilized.
Looking forward to your comments.
11-29-2010 06:53 AM
I will have to give a brief and probably disappointing answer. Although the VIs I had posted were working well the last time I looked at this topic, I have since run into a problem. I had to migrate the system to new computers (which are similar to the previous ones), but with a much higher channel count and acquisition rate. At 40+ channels and a 5 kHz acquisition rate, the data on my Reader was running as much as 10 seconds behind.
I admit I was a little confused by what I had done at this point, since it had been quite a while since I created the VIs or even looked at the algorithm, so I decided to give up on ideal operation and go for good enough. The requirement was only to view real-time data on the reader; sending and saving the complete data was something I had included as one of my own requirements, but in reality we could live without it. I killed buffering altogether and just sent enough data to view. This meant I had to do much more calculating on the Writer, which defeated my initial goal of making it an acquisition-only machine, and I now have a very small file on the reader with an incorrect timestamp, but it is still somewhat of a backup file and still meets the initial minimum requirements.
I'm afraid none of this is of any help to you, and I still think that NI could do some work on their buffering options in everything from Queues to Shared Variables.
11-29-2010 09:09 AM - edited 11-29-2010 09:10 AM
Hi,
@deskpilot wrote:
I'm afraid none of this is of any help to you, and I still think that NI could do some work on their buffering options in everything from Queues to Shared Variables.
Exactly my wish. There are buffering options in all of these, yet most of the time, in the situations that are not so simple but are important, they fail to work for one reason or another.
In my case, I need to transmit continuous video and voice streams between two users, so if either of them is faster or slower than the other, the program is not going to work. My computers are ordinary Vista/XP machines, not too old. More importantly, the programs are not going to run on specific computers; any user to whom we distribute the system will run them. So if one side is faster than the other, data will accumulate at one point or lapse at the other, and it's not going to be very good even with a "working" buffer. The only reason I want to define a buffer, and wish it would work, is the start of the streaming - when the stream starts being written into the variable and the reader (though started/initialized/running) is not yet able to start reading it (due to the delay in connection and data transmission speed). For that reason, I would like the buffer to be able to absorb this initial (connection) delay. That is how I see the situation; I'm not sure if it's right or wrong.
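Just to size that start-up buffer in my head, here is a quick sketch (Python, not LabVIEW; the numbers are made-up placeholders, not values from my actual program):

```python
import math

# Rough sizing of the subscriber-side buffer needed to absorb an initial
# connection delay. Both numbers below are illustrative assumptions.
startup_delay_s = 3.0      # how long the reader lags before its first read
writes_per_second = 10     # how often the writer publishes a new packet

packets_needed = math.ceil(startup_delay_s * writes_per_second)
print(packets_needed)      # 30 packets would cover a 3-second start-up gap
```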
And I don't see any good example where sound would be continuously transmitted to a remote location.
Speaking of lossless data transfer, have you considered the much-touted "Network Streams"? After failing in my attempts to get a quality sound stream with shared variables, I turned to Network Streams (NS) with high hopes. Those were shattered too when I saw that, in order to define the endpoints, I need to give the IP address (or DNS name) of the computers hosting the endpoints - which means defining the points in the program itself. Now, if a user arbitrarily runs the program on "any" computer connected to the Internet, my DS and SV methodology still works, because both refer to one central point that does not change. With NS, however, each side has to refer to the other user's IP/DNS, which is not always known. I could find and concatenate strings at run time, but not all computers have a public IP accessible from anywhere, and I'm not sure about finding the DNS name programmatically (I haven't spent time on that yet). Even if I did find it, and it worked, I would still need "some" way - probably DS or SV - to send that address to the program running at the other endpoint so it could be attached to the string of the "create endpoint" function.
But in your case it seems you have two fixed computers/targets to read/write data, so you may consider NS option.
I will post my programs so that you or the community can analyse what is wrong and what could be done. Buffering actually happens, but not very effectively, no matter what values I put into "BufferMaxPackets" in the DataSocket property functions.
11-30-2010 11:41 AM
This post contains responses to questions from multiple previous posts from multiple people. Hopefully it won't get too confusing.
Let me try and shed some light on the BufferMaxBytes and BufferMaxPackets properties. The term packets is a little cryptic, and it's probably easier if you think of packets as the number of elements written through the DataSocket Write VI. If BufferMaxPackets is set to 10 and you write elements of type double, the client buffer will grow and hold at most 10 doubles. The BufferMaxBytes property works in conjunction with the BufferMaxPackets property to limit the maximum size of the client buffer. In effect, they act as a floor function. Continuing with the previous example, if BufferMaxBytes was set to 72, the client buffer would only be able to hold up to 9 elements instead of 10, since 9 doubles will consume 72 bytes worth of space. This isn't particularly useful for scalar data types, but it becomes more important for data types like arrays, strings, clusters, and waveform data types. For these data types, an individual element (or packet) can be very small (1D array of size 2) or very large (1D array of size 1,000,000). In this case, it's often more convenient to limit the maximum size in terms of bytes rather than packets. The example VIs located at <LV Dir>\examples\comm\datasktbuf.llb do a reasonable job of demonstrating how these two settings can affect streaming performance and whether or not data gets lost or overwritten.
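To make the interplay of those two limits concrete, here is a minimal sketch in Python (not LabVIEW; the class and its names are purely illustrative, and it simply refuses new data when full rather than modeling the real engine's overwrite behavior):

```python
from collections import deque

class ClientBuffer:
    """Toy model of a client buffer bounded by both a packet count
    and a total byte count."""

    def __init__(self, max_packets, max_bytes):
        self.max_packets = max_packets
        self.max_bytes = max_bytes
        self.packets = deque()
        self.used_bytes = 0

    def enqueue(self, packet: bytes) -> bool:
        """Accept the packet only if neither limit would be exceeded."""
        if len(self.packets) >= self.max_packets:
            return False            # packet limit reached
        if self.packets and self.used_bytes + len(packet) > self.max_bytes:
            return False            # byte limit reached (one packet always fits)
        self.packets.append(packet)
        self.used_bytes += len(packet)
        return True

# The example from the post: doubles are 8 bytes each, so with
# BufferMaxPackets = 10 and BufferMaxBytes = 72 only 9 doubles fit.
buf = ClientBuffer(max_packets=10, max_bytes=72)
accepted = sum(buf.enqueue(b"\x00" * 8) for _ in range(10))
print(accepted)   # -> 9
```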
If you're using waveform data type (WDT) or arrays as your element type, each write constitutes a packet or element of data, and packets are never combined. This means if you write four 1D array elements of size 250, you will always read four 1D array elements of size 250 on the other end. You can't collapse the packets and read a single 1D array of size 1000. This is true for both the PSP (Shared Variable) and DSTP protocols. Collapsing can be accomplished to a degree with network streams, since they support both single-element and multi-element reads and writes. In this case, you would configure the element type of the stream to be a double and use the multi-element read/write to vary the number of elements read and written on each side. If you set the element type of the stream to 1D array and use the single-element read/write, you'll get the same behavior you do with the PSP and DSTP protocols. However, it still won't work for WDT natively. To do this, we'd have to track the dt, t0, and number of samples included in each waveform element and make sure the written elements were contiguous, such that they could be read out in smaller or larger chunks than they were written into the stream. This is a pretty expensive operation to perform with each read and write, which is why we didn't try to support it. If you know your data is contiguous, it's always going to be more efficient for you to transfer the t0 and dt up front and simply deal with the array portion of the WDT from there on out.
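For the contiguous-data case described in that last sentence, here's a rough sketch of the idea (Python rather than LabVIEW, with a plain dict standing in for the WDT; the function names are my own):

```python
import numpy as np

# Writer side: send t0 and dt once, then stream only the sample arrays.
def split_waveform(t0: float, dt: float, samples: np.ndarray, chunk_size: int):
    """Return the one-time header plus fixed-size chunks of the Y data."""
    header = {"t0": t0, "dt": dt}
    chunks = [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]
    return header, chunks

# Reader side: rebuild one contiguous "waveform" from however many chunks arrived.
def join_chunks(header: dict, chunks: list) -> dict:
    return {"t0": header["t0"], "dt": header["dt"], "Y": np.concatenate(chunks)}

header, chunks = split_waveform(t0=0.0, dt=1.0 / 15000,
                                samples=np.arange(1000.0), chunk_size=250)
wf = join_chunks(header, chunks)
assert len(wf["Y"]) == 1000   # four 250-sample writes read back as one 1000-sample array
```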
You can use the DataSocket VIs to read and write items published by either the PSP or DSTP protocol and enable client side buffering for either protocol. To do this, you have to remember to set the BufferMaxPackets and BufferMaxBytes properties described above as well as set the appropriate mode on the Open VI to enable buffering for just reads or for both reads and writes on the connection. If you're using the Dynamic Variable API, you can only read/write points published through the PSP protocol. In LV 2010, you can now set the size of the client side buffer by using the Open and Verify Variable Connection or Open Variable Connection in Background VIs located on the PSP Variable palette. In this case, the buffer size is always in units of elements, and both reads and writes are buffered using the same buffer size setting. Finally, for buffering to be effective, you also have to enable buffering on the server. For Shared Variables, this is set on the properties page when you right click on the Variable item in the LabVIEW project. For items published through the DSTP protocol, this is set in the DataSocket Server Manager. For predefined items, you can uniquely configure the maximum size in bytes and packets for each item. Non-predefined items all share the same configuration for the maximum size in bytes and packets. In either case, the default value is 26214400 bytes and 1 packet, so double check that the number of packets is larger than 1 if you want buffering on the client connections to be effective. You can get away with no buffering on the server or the writer if you're using synchronous writes (timeout > 0) that complete before timing out. However, if you're using asynchronous writes, you should at a minimum enable buffering on the server, if not both the writer and the server.
Some final parting shots: If you end up going down the road of trying to write your own buffer object, you might want to look at <LV DIR>\examples\general\globals.llb\Smart Buffer (DBL).vi. I think it might already support the functionality you were trying to implement at one time. If not, it should be a pretty good starting point. If network streams provide what you want aside from the client/server aspect, you could try writing your own server that you deploy that negotiates with clients and then gives them the information they need to establish the peer to peer connection for the network stream. I realize it's probably more work than you wanted, but it might still be better than the other options in front of you. Something to think about...
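To illustrate that last suggestion, here is a very rough sketch (Python, with a made-up wire format, exactly two clients, and no error handling) of a rendezvous server that simply swaps the two clients' addresses so they can open the peer-to-peer connection themselves:

```python
import socket
import threading

# Hypothetical rendezvous server: each client sends "name,host,port"; once two
# clients have registered, each is sent the other's address so it can create
# its own peer-to-peer connection (e.g. a network stream endpoint).
def rendezvous_server(listen_port: int = 6000):
    peers = {}                      # name -> (connection, "host:port")
    lock = threading.Lock()

    def handle(conn):
        name, host, port = conn.recv(1024).decode().split(",")
        with lock:
            peers[name] = (conn, f"{host}:{port}")
            if len(peers) == 2:     # both sides known: exchange addresses
                (_, (c1, a1)), (_, (c2, a2)) = peers.items()
                c1.sendall(a2.encode())
                c2.sendall(a1.encode())

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", listen_port))
    srv.listen()
    while True:                     # accept clients forever
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```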
11-30-2010 12:33 PM - edited 11-30-2010 12:34 PM
The smart buffer is definitely what I was looking for to handle data assimilation. I understand not wanting to monitor waveform timestamps to check for assimilability (made that up), but why not give the option to read all for anything that buffers? Take out the Y values, assimilate them, then put the waveform back together on the output (which is what I did in my buffer, though at this point I'm not sure it was the best it could be). Just add a read-all option to any buffered read/write; if the data is not continuous, leave it up to me to decide not to check the box to assimilate.
The project that originally bore these questions is finished to the point that I'm ready to send it down the road. Or perhaps I'm ready to send it down the road, and so therefore it is finished. Either way, I won't be using the smart buffer on it, but would be curious how much more time the smart buffer functionality would require to execute versus a regular FIFO. Maybe I'm unusual or just uninformed, but I can see a use for the ability to read all in almost every acquisition system I've put together.
Edit: You mentioned the ability to collapse in Network Streaming, which I think is new to 2010. I'm still on 2009 but will investigate that when I upgrade. Thanks again for the input.
11-30-2010 05:08 PM
@reddog wrote:
...This isn't particularly useful for scalar data types, but it becomes more important for data types like arrays, strings, clusters, and waveform data types. For these data types, an individual element (or packet) can be very small (1D array of size 2) or very large (1D array of size 1,000,000). In this case, it's often more convenient to limit the maximum size in terms of bytes rather than packets.
Why? Isn't it the other way around? I mean, since an individual element can be very small or very large, isn't it more convenient to just say, "I want to buffer 150 elements, no matter how big or small they might be"?
I am just trying to understand things.
In my case, I am sending and receiving continuous sound data (both happening on both sides - exactly like in a voice chat/call). The configuration parameters are:
Datatype: Array of Double Waveform
No. of Samples per Ch: 25000
Sample rate (Samples/second): 15000
No. of Channels: 1
Bits per sample: 8
This is the optimal configuration I have arrived at through experiments - there is a delay of about 2-3 seconds, but no breaks/loss/overlap/noise, etc.
In the DataSocket Open, I select "BufferedRead", and in the DataSocket property node (for the reading variable reference) I set "BufferMaxPackets" to 150; the "BufferUtil(Packets)" property shows only about 36% utilization. So why that delay? In simultaneous loops I also send and receive image data (from webcams), and the images (using only 1 buffer packet) are very "real time" compared to the sound data. Why is that, when there is still plenty of buffer left to be used?
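As a quick sanity check on that delay (my own back-of-the-envelope arithmetic, in Python, assuming each DataSocket write carries one full 25000-sample block):

```python
samples_per_write = 25000   # "No. of Samples per Ch" from the list above
sample_rate = 15000         # samples/second

seconds_per_packet = samples_per_write / sample_rate
print(round(seconds_per_packet, 2))   # ~1.67 s of audio in every single write,
                                      # before any network or buffering delay
```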
On the other hand, while creating shared variables programmatically, I have run into another issue with setting buffer values programmatically, for which I have created a separate thread.
In the current discussion, it would be great if you could guide me on setting the right buffer values for my case.
Thanks ahead!
Regards,
11-30-2010 08:45 PM
If you're happy with viewing things in terms of packets, then just make sure the max bytes is set to the maximum value for the data type and that the max packets is set accordingly. In cases where you're dynamically generating packets which may vary in size, or if your element type is complex (say, a nested cluster using arrays, strings, and scalars as elements of the various clusters), it becomes very difficult to determine how much memory your client connection is going to consume. In these cases, some people like the peace of mind of knowing that their client connection isn't going to consume more than X bytes of memory.
From the rest of your post, it sounds like data is getting buffered the way you want, except that the latency is longer than what you expected based on similar results you're seeing for a non-buffered connection. Is that correct? If so, it's going to be difficult to say why that is without seeing your code. For instance, what are the data types of the two points in question? How do you measure the 2-3 seconds of latency? Is that the time to retrieve all of the data or just to receive the first packet? Regardless, you shouldn't confuse buffer utilization with an indicator of latency. Buffer utilization on the reader is simply an indicator of how well your application is keeping pace with the network process that's publishing the data from the server. In this case, your results seem to indicate the application containing the reader is keeping up fairly readily with the data being sent to it. This means the bottleneck could be:
Again, without seeing your code, it's hard for me to say. I can only point you in the broad directions listed above.
12-02-2010 02:54 AM - edited 12-02-2010 02:57 AM
@reddog wrote:
If you're happy with viewing things in terms of packets, then just make sure the max bytes is set to the maximum value for the data type and that the max packets is set accordingly.
I thought that if I specified both values, they would conflict with each other - that they are "either ... or" values. Is that not the case?
@reddog wrote:
In these cases, some people like the peace of mind of knowing that their client connection isn't going to consume more than X bytes of memory.
Yes, sounds reasonable too. I thought like this in the beginning, but then "packets" became more attractive.
@reddog wrote:
From the rest of your post, it sounds like data is getting buffered the way you want, except that the latency is longer than what you expected based on similar results you're seeing for a non-buffered connection. Is that correct? If so, it's going to be difficult to say why that is without seeing your code. For instance, what are the data types of the two points in question? How do you measure the 2-3 seconds of latency? Is that the time to retrieve all of the data or just to receive the first packet? Regardless, you shouldn't confuse buffer utilization with an indicator of latency.
The two data types are Array of Double Waveform (for sound) and String (for images). The images consume no buffer for transmission and show no latency, so compared with seeing someone's face, I hear the sound arriving much later (like badly synchronized dubbing/lip-syncing). That is how the latency in the sound data is bothering me. Yes, I also agree that the problem is not the buffer "amount", since there is plenty of buffer space still available. Perhaps it's some other buffer parameter (maybe I am confusing the number of waveforms with the number of arrays).
I am not very fluent with the waveform datatype. So, looking at my code, can you suggest something about the buffer parameters and also the configuration for sound capture and sound playback?
The attached zip file has two projects:
1. A publisher project, with a publisher library (server) that contains the published shared variables.
2. A subscriber project, with client (reader and writer) programs and a library that has a single bound variable aliasing the publisher's shared variable.
So, for a proper test, you need 3 computers, one of which is where you will deploy the publisher library. After deploying, since you will know the IP address of that system, you need to bind the subscriber library's "bsv_chat" variable to the publisher's "sv_chat" variable.
In the subscriber project, the two programs are accompanied by the library. In the programs, I deploy the library from a "data" folder where the library will stay after compilation. If your settings are different, or if you prefer to run directly from the project explorer, you may have to deploy that library directly from its home folder.
I hope you can run a test and let me know what the faults are.
Thanks again for your time and efforts.
Edit: Oh, and I also suspect that the sound latency is related to CPU usage, because when I interchanged the two programs A and B, the result was still the same: the computer that was receiving the sound data late was still receiving it late. So there is nothing different about the programs; the computers looked like the culprits. But still, perhaps the parameters could make a difference.
12-03-2010 11:32 AM
The max packets and max bytes settings will both have an impact. It's not simply one or the other that takes effect. For instance, say max bytes = 100 and max packets = 10. If each packet is less than 10 bytes, then the max packets limitation will kick in, and you'll be able to enqueue 10 elements before the buffer says it's full. However, if the first two packets are 50 bytes each, then you'll only be able to enqueue 2 elements before the buffer says it's full. No matter what, you'll always be able to enqueue one element regardless of the max bytes setting.
Regarding the rest of your post, I think I understand some of your latency problems better now. From what I could infer, it looks like you're having trouble syncing your audio and video streams such that the audio matches the image you're seeing on the screen. Here are some comments after looking at your code. Keep in mind that I didn't have the Vision module installed or a microphone or video camera available, so I couldn't test any real-world conditions. I was also testing everything on a single computer, so I wasn't seeing the effects of going across a network. Regardless, I hope some of the comments will be useful.
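Condensing that rule for equal-size packets into a tiny helper (Python, hypothetical names, just restating the arithmetic above):

```python
def packets_that_fit(max_packets: int, max_bytes: int, packet_size: int) -> int:
    """Equal-size packets a client buffer will accept: whichever limit is
    reached first wins, but at least one packet always fits."""
    return max(1, min(max_packets, max_bytes // packet_size))

print(packets_that_fit(max_packets=10, max_bytes=100, packet_size=8))    # 10 - packet limit wins
print(packets_that_fit(max_packets=10, max_bytes=100, packet_size=50))   # 2  - byte limit wins
print(packets_that_fit(max_packets=10, max_bytes=100, packet_size=500))  # 1  - one packet always fits
```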
12-06-2010 06:34 PM
Hi again,
So, according to your explanation, the minimum of {BufferMaxBytes, BufferMaxPackets} will be effective. I had the same fear - that's why I thought to set only one of the values 😉 - but as you said, I must set both of them. That seems inconvenient, because I will always have to guess the largest possible value for both of them.
From your points after reviewing my code -