LabVIEW


Data Socket or Shared Variable FIFO

Just when I thought I had a handle on things, I'm back to running in circles again. I need to transmit lossless data from one LabVIEW 2009 computer to another. The reader is going slower than the writer, so I need to buffer data somewhere, and I need to make sure that the buffer is a FIFO and not a cache, i.e. once I read the data, it is no longer in the buffer. Someone else has a long post that deals with this, but it seems much more involved.

 

I am currently thoroughly confused by DataSocket, and by transferring lossless data over a network in general. I keep reading that DataSocket can't buffer, yet I can choose to do a buffered read/write?

 

The DataSocket server is running on my writer. I open a buffered read/write here and set max packets and max bytes to 4000 to start with. I open a buffered read on my reader and also set max packets and max bytes to 4000. I started with DSTP in my URL and see no buffering. I am writing 250 points per write and should be getting about 1,000 points per read, but I only get 250. So I switched to PSP instead of DSTP in the DataSocket Open on both sides. This requires me (I guess) to create shared variables in my project and point the DataSocket Open at that library. I did this; still no buffering. I changed the shared variables in my project to buffer 4000 points per waveform. Still no buffering. So I switched to Shared Variables entirely (no DataSocket), writing to a node on the write computer and reading with the Shared Variable VIs on the read computer. Still only 250 points per read?

 

Here is another point of confusion. What is the difference between these three processes:

1. Create a shared variable and drop a node onto the block diagram

2. Use the Open, Read, Write VIs inside of Data Communication>Shared Variable on the block diagram

3. Use DataSocket VIs with psp at the beginning of the URL to read and write shared variables

 

So, I would like to understand all of this, but more than anything I want to transmit lossless data across the network with a FIFO buffer that is definitely not a cache. Oh, and unless it's necessary, please don't throw in the STM VIs to confuse me even more.

Message 1 of 25

Hi deskpilot,

 

I think the "long" thread about all these topics is this one.  I do recommend you read through at least some of it to understand the different behaviors of shared variables through different APIs (the 3 "processes" you mentioned in your post). I myself learned a lot of useful information from that thread. 

 

To answer some of your questions: it is possible to do buffered transfers using DataSocket. It would involve something like this (second VI snippet in that post).

The relationship between the DataSocket terms (max packets, max bytes, etc.) and shared variables is outlined like this:

 

BufferMaxPackets and the Shared Variable "elements" setting are synonymous for all data types, except for those that do not support the BufferMaxPackets property (variants and large clusters).

 

You may want to check the settings for the DataSocket item that you're writing to:

 example.png

 

 

To shed some light on the differences between 1, 2, and 3 in your post:

- 2 currently does not support buffering in LV 2009

- you can use both 1 and 3 to do buffered transfers, but there are some important differences between how they operate (outlined in the thread)

 

Hope this helps. 

Misha
Message 2 of 25

I've read that other post a couple of times, but I've still got problems. I checked the DataSocket Server Manager and set everything to the maximum size. I was actually defining the variables at run time, and the default settings for that were already set large. I tried defining them in the manager instead, but I see no change.

 

I am still confused about whether it matters if I use DSTP versus PSP. If I use PSP, I don't think the DataSocket Server Manager is involved? Either way, no matter what I do, I cannot get anything to buffer. Has anyone successfully buffered a 1D array of double waveform, and if so, can you show me how? I am going to build some test VIs, but I suspect there is a problem with buffering a 1D array of double waveform. If there is, my entire project is in tatters and I'm going to hide all of the sharp objects from myself very quickly.

Message 3 of 25

Before going too far down this road, it should be noted that neither the Variable nor the DataSocket API provides lossless data transfer. They do support buffering of data, but the buffering strategy doesn't quite behave the same way as most FIFO implementations. For instance, when the buffer fills up, most FIFOs will not accept new data and will preserve the data that has already been written. However, the buffering strategy used by the Variable and DataSocket will overwrite the oldest data with newer data as the buffer becomes full. In essence, it behaves as a "FIFO with overwrite". I believe both the DataSocket and Variable APIs will generate a warning when this happens, but it won't prevent data from getting overwritten. If you know your application is always going to be able to keep up with the data transfer, then the buffering behavior effectively acts as a FIFO. If it doesn't keep up, there is nothing built into the Variable or DataSocket API that will prevent you from losing or overwriting data. If this is acceptable to you, then the Variable and DataSocket features may be appropriate tools to use in your application. If not, you'll probably have to use the TCP primitives to build your own flow control for the data exchange between the two applications.
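To put that distinction in text form, here is a rough Python sketch (not LabVIEW code, and not the actual Variable/DataSocket implementation; the class and method names are made up) contrasting a conventional FIFO with a "FIFO with overwrite":

    from collections import deque

    # Conventional FIFO: refuses new data when full, so nothing already
    # written is ever lost (the writer must wait or handle the rejection).
    class BlockingFifo:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = deque()

        def write(self, item):
            if len(self.items) >= self.capacity:
                return False               # buffer full, data rejected
            self.items.append(item)
            return True

        def read(self):
            return self.items.popleft() if self.items else None

    # "FIFO with overwrite": writing into a full buffer silently discards
    # the oldest element, which is the behavior described above for the
    # Variable/DataSocket buffers (a warning is raised, but data is lost).
    class OverwriteFifo:
        def __init__(self, capacity):
            self.items = deque(maxlen=capacity)   # deque drops the oldest on overflow

        def write(self, item):
            overwrote = len(self.items) == self.items.maxlen
            self.items.append(item)
            return not overwrote           # False means the oldest packet was overwritten

        def read(self):
            return self.items.popleft() if self.items else None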

 

When talking about the Variable and DataSocket features, I think many people forget or fail to understand that these communication tools are client/server architectures where each piece has its own configuration. Before you can do anything, you must first create a published item in the server. This published item can be configured as a single value or as a buffer of values that uses the buffering strategy described above. For DataSocket, you would typically use the DataSocket Server Manager to create new published items and to configure buffering options. For Variables, a new item is created in the server by creating a new Variable within the project and deploying it. In this case, the properties dialog on the Variable item in the project controls the buffering options. It should also be noted that Variables are automatically deployed when running VIs in the project which include the Shared Variable node, since the node is statically bound to a single item published from the server. If you are using the programmatic API from the palette (the Open, Close, Read, and Write VIs), this deployment step doesn't automatically occur because there is no static binding to a single published item. Also, this automatic deployment only occurs when running through the IDE. When it comes to deploying a built application, you will need to make sure you account for the deployment of the server configuration, or your application will fail to work.

 

Once you've deployed your published items to the server, you now need to create a client connection to them. This client connection also has its own buffer settings, where the buffering strategy again behaves as described above. When using DataSocket, the buffer settings are controlled through the mode input on the Open VI and various properties in the Property Node. Each instance of the Open VI corresponds to a new client connection to the server. When using the Shared Variable node, the server and client buffer configurations are tied to the same settings within the properties dialog (in effect, the client and server buffer settings are always the same). Also, when using the Shared Variable node, you get a unique client connection for each instance of the node on the block diagram. As mentioned previously, there is currently no support for client-side buffering when using the programmatic Variable API. We hope to add support for client-side buffering in a future release.

 

In general, if you enable buffering on the server, you're also going to want to enable buffering on the client connection. There may be times when this isn't true, but in general you're probably not going to get what you expect if you don't enable buffering on both the client and the server. Hopefully this information clarifies some things and helps you determine whether or not these features are appropriate for your application.

Message 4 of 25

"For DataSocket, you would typically use the DataSocket Server Manager to create new published items and to configure buffering options" I'm not sure why this would be typical. Seems just as easy to programmatically create the variable name and configure the buffer. I'm not sure why you would ever need to open the DS server manager, although I have tried every combination of using it versus not.

 

"In essence, it behaves as a "FIFO with Overwrite"."  Yes, I am okay with a FIFO with overwrite as I am confident that I could pull data out of the buffer fast enough if the buffer worked like I thought. However, my example that I built shows me two things. I am using DataSocket read and write with PSP and DSTP in the URL.

 

1. DSTP will buffer numbers, but it will not, in any way, buffer a 1D array of double waveform.

 

2. PSP will buffer the waveform, but not in the way that I expected.

 

Each waveform that gets written to the FIFO is about 250 points. I assumed that if I read the buffer and it had four sets of those writes in it, I would get a 1,000-point waveform on the read. I do not; I would have to read it four times. At first I thought this was asinine, because I assumed that reading a queue pulls all available data out of the queue at once, right? Wrong ... never knew that.
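For anyone else surprised by this: a buffered read returns one packet (one write's worth of data) per call, so getting everything currently in the buffer means reading in a loop and concatenating. A minimal Python sketch of the idea (read_packet is a hypothetical stand-in for a single buffered DataSocket/Variable read, not a real API):

    def drain_buffer(read_packet):
        # Read one packet (~250 points here) per call until the buffer is
        # empty, concatenating the packets into one long waveform.
        waveform = []
        while True:
            packet = read_packet()         # returns None when the buffer is empty
            if packet is None:
                break
            waveform.extend(packet)
        return waveform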

 

So, I guess the only way to do what I want is to use my own homemade waveform holder (FIFO). I initially thought I would have to use this on the writer and send large packets at the pace the reader wants them. In doing this, I would either have to have feedback telling me when to change the size of the packets to match the reader's speed, or simply let the PSP FIFO fill up and empty it out when I am done. Then I would have to worry about the fact that it is a FIFO with overwrite.

 

I would really rather have my waveform-building FIFO on the reader, though, because the reader is ultimately ancillary to the writer, if only slightly. That way the writer couldn't get bogged down with an enormously long waveform in the event of a reader slowdown. I am thinking I will call the waveform-building FIFO VI from the main reader to run unopened, and then communicate with the main reader via a functional global? Not sure exactly what that will look like yet, because what I have so far would be more like a cache.

 

It sure would be nice if the FIFOs, whether it's a queue, a DataSocket, or a PSP server, could be configured to build the waveforms rather than storing them as individual packets.

Message 5 of 25

I put together what I am calling an assimilating FIFO. As far as I can tell, this builds the waveform lengthwise until the FG is called in the main VI with the enum set to Read Data, at which point it clears the buffer and starts over. I would greatly appreciate it if someone could look at this and verify it, as I am having trouble proving it to myself.
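For anyone trying to follow along without opening the VI, the action engine behaves roughly like this hypothetical Python sketch (the method names stand in for the Write Data and Read Data enum cases, and the lock stands in for the non-reentrant functional global):

    import threading

    class WaveformFifoAE:
        # Functional-global-style buffer: writes append to a growing
        # waveform; a read returns everything accumulated so far and clears it.

        def __init__(self):
            self._lock = threading.Lock()      # FG runs one action at a time
            self._buffer = []

        def write_data(self, packet):
            with self._lock:
                self._buffer.extend(packet)    # build the waveform lengthwise

        def read_data(self):
            with self._lock:
                data, self._buffer = self._buffer, []   # hand back and start over
                return data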

 

There are a couple of nuances; I'm not sure yet if they are problems. I am calling Waveform FIFO AE via an Invoke Node before I start the main program that calls the FG to Read Data. The read is in a loop that runs with a 250 ms wait. The write, as you can see in the VI, has a 50 ms wait, but sometimes takes up to 200 ms to run. I am not sure if this results from the DataSocket read, the main loop blocking during an FG read, or the fact that the inconsistent read/write sequencing allows the waveform to get up to 5,000 points before the buffer is cleared.

 

Usually in the past when I see huge processor consumption like this setup exhibits, it is due to a rookie mistake, but I'm not sure that is the case here. I would greatly appreciate it if someone could point one out, though. I tried initializing the array inside of Waveform FIFO AE without any improvement, but at the time I did not realize the array was getting to be 5,000 points. Also, I could not figure out how to initialize it without destroying the FIFO functionality. Since I do not know how big the array will be, and it is in fact always changing, I don't think I can initialize it without corrupting the data stream with padding zeros. And yes, alarms do go off when I read that last sentence; I'm just not sure what to do about it.

 

 

So, is there a way to prevent the bad practice of building an array inside a loop in this case? And is there a way to synchronize the read/write of the FG between the loop with the 50 ms wait and the one with the 250 ms wait? Meaning: can I make sure that I get five writes followed by one read? Keep in mind the 50 ms loop is running in the background and being called by an Invoke Node. Not sure this is necessary; I just didn't want another loop on the main diagram and didn't think it would make a difference since they are communicating by FG anyway.

 

Edit: Just realized I need something in the AE to prevent the read from performing a Read Data while the buffer is cleared; I will be adding that.

Message 6 of 25

Okay, so that pretty much doesn't work at all. The Read Data almost always occurs between the read buffer and the write, so there is a bunch of redundant data, which is why the waveform was getting so long. Back to the drawing board.

Message 7 of 25

May just be talking to myself here, but I think this fixes it. I just need to know whether a Read Data occurred between the read buffer and the write to buffer; if so, I only write the newest data to the buffer. Now it looks like it works: the length of the waveform stays around 300-500 points per channel. This corresponds to about three DataSocket writes per Read Data, which works for me. It is still getting really close to pegging the processor, but this is a 1.8 GHz single-core machine, so it may be better on a real computer. The processor usage is much more consistent, though, so I have some confidence in it.
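In text form, the fix amounts to something like the following hypothetical Python sketch (not the actual VI; begin_write/finish_write model the separate "read buffer" and "write to buffer" steps, and the read counter is how the sketch detects that a Read Data slipped in between them):

    class WaveformFifoAE2:
        # Sketch of the fix: if Read Data ran between snapshotting the
        # buffer and writing it back, keep only the newest packet so data
        # that was already read out is not written back in as duplicates.

        def __init__(self):
            self._buffer = []
            self._read_count = 0                 # bumped every time Read Data runs

        def begin_write(self):
            # "read buffer" step: snapshot the buffer and note the read count
            return list(self._buffer), self._read_count

        def finish_write(self, snapshot, read_count_at_start, packet):
            # "write to buffer" step
            if read_count_at_start != self._read_count:
                self._buffer = list(packet)              # a read slipped in: newest data only
            else:
                self._buffer = snapshot + list(packet)   # normal case: append

        def read_data(self):
            data, self._buffer = self._buffer, []
            self._read_count += 1
            return data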

Message 8 of 25

Hi,

It seems like you found a workaround to your issue. But just to understand the problem better: why did you say that waveform data cannot be sent through DataSocket?

I use DataSocket to stream videos (captured from webcams) and it works OK. I flatten the video to a string while writing to DataSocket, and when reading it from DS, I unflatten the string back to video. I am not sure why you cannot send a double waveform stream. I couldn't open your files, as I still haven't installed LabVIEW 2010 on my computer. Still working with 8.6.

 

Reddog and Mishkin gave good explanations. 

 

But I am not sure how the "packets" are defined. So far I have used DataSocket, and I have implemented Shared Variables as well, but just yesterday I started using Shared Variable PSP URLs in DataSocket Open VIs. It looks like things now work the way I wanted, because my DS writer didn't support multiple simultaneous write operations on the same item. Simultaneous write operations are possible if they don't happen at exactly the same time; somehow the timeout value in DS Write doesn't work as it should. For this mystery I started another thread, but didn't get a convincing answer about why one DS Write will not wait until the timeout before giving an error that the DS item is busy.

 

But for now, any explanation about packets?

Vaibhav
Message 9 of 25

I think there are two questions that I can answer from the above post. First, I did not say "waveform data cannot be sent through DataSocket." I said that a 1D array of double waveform cannot be buffered using the DSTP protocol in DataSocket. You can send one packet, but it will not buffer. Not sure why.

 

That leads to the second question: what is a packet? Here is the way I understand it; I will use my 1D-array-of-waveform acquisition as the example. The loop that houses the DAQmx Read is running at 20 Hz nominally. The acquisition is running at 1000 Hz. This means that every time the Read executes, there are approximately 50 data points per channel waiting to be transferred into memory. This is the packet: it contains fifty data points for each channel being collected. If the acquisition consists of one channel, a packet will have fifty data points in it.
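As a quick sanity check on that arithmetic (just a sketch in Python; the numbers are the ones quoted above and the names are made up):

    sample_rate_hz = 1000          # acquisition rate per channel
    read_loop_hz = 20              # nominal rate of the loop housing the DAQmx Read

    points_per_packet_per_channel = sample_rate_hz // read_loop_hz
    print(points_per_packet_per_channel)   # -> 50 points per channel per packet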

 

You can hold onto packets and merge them before putting them into the buffer, whether it is a queue, a shared variable, or a DataSocket node. After glancing at your other post, it is something you may want to consider. I didn't look at your code, but it sounded like Ben was assuming that you weren't buffering data, while you were assuming that you were; I'm not sure who is right.

 

In my case, merging packets before putting them into the FIFO had the potential to cause problems, so I chose to put the packets into the FIFO individually on the writer (producer/server) side, and then merge them on the reader (consumer/client) side before operating on them or putting them into a file. This is at least marginally better than just reading out of the NI FIFO straight into the file write, because I can merge data faster than I can write it. If the producer is putting packets into the FIFO twice as fast as I can pull them out and write them to file, then the FIFO will eventually overflow. But if I can merge two or more packets in the time it takes to write one new, larger packet, then the consumer will keep up.
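A rough Python sketch of that consumer-side strategy (names are hypothetical; the point is simply that merging every packet currently waiting into one larger block before the slow file write lets the consumer keep pace with the producer):

    import queue

    def consumer_loop(packet_queue, write_to_file, stop_event):
        # Drain all waiting packets, merge them into one block, then do a
        # single (slow) file write on the merged block.
        while not stop_event.is_set():
            merged = []
            try:
                merged.extend(packet_queue.get(timeout=0.25))   # wait for at least one packet
            except queue.Empty:
                continue
            while True:                                          # grab everything else waiting
                try:
                    merged.extend(packet_queue.get_nowait())
                except queue.Empty:
                    break
            write_to_file(merged)        # one write per merged block, not per packet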

 

 

Message 10 of 25