LabVIEW Idea Exchange


Extend native queues to support network queues

Extend the native LabVIEW queues to support network-based queues. We have Network Streams, which are very nice, but they are point-to-point. Queues are extremely useful since they allow a many-to-one relationship. It would be nice if LabVIEW supported network queues. I have worked with a third-party implementation which works reasonably well; however, it would be nice if this were native functionality. Network queues are essential if you want communication between different applications, or for sending data from a TestStand environment to a LabVIEW front-end GUI. They are quite powerful and would make it much easier for users to implement multi-computer applications.


NOTE: I need to post an updated version of the Network Queue class. I have improved and extended this implementation.


If the idea was "Extend Network Streams to support many-to-one relationships," I'd support it.  I'm not convinced extending native queue functionality to support network communications is a good solution; queue behavior doesn't seem to map to it particularly well.  For example, what does Max Queue Size mean in a network queue, and how do you make Enqueue block when the queue is full?


(I haven't looked at your Network Queue class yet.  Maybe you've addressed my concerns there?)
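For what it's worth, the blocking-Enqueue semantics in question are easy to state for a local queue; the hard part is reproducing them over a network, where the producer would need a round trip just to learn whether space is available. A minimal local sketch (in Python, since LabVIEW code can't be shown as text; `queue.Queue(maxsize=...)` stands in here for a bounded LabVIEW queue):

```python
import queue
import threading
import time

# A local bounded queue: put() blocks when the queue is full. Over a
# network, replicating this behavior would require the remote side to
# confirm that space is available before the producer can continue.
q = queue.Queue(maxsize=2)
q.put("a")
q.put("b")            # the queue is now full

def consumer():
    time.sleep(0.1)   # simulate a slow reader
    q.get()           # frees a slot, unblocking the producer

threading.Thread(target=consumer).start()

start = time.monotonic()
q.put("c")            # blocks until the consumer frees a slot
elapsed = time.monotonic() - start
print(f"put() blocked for ~{elapsed:.1f} s")
```

Locally the blocking is free; over a network it turns every full-queue Enqueue into a wait on a remote acknowledgment.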


Yes, there are some minor issues that would need to be addressed, such as what the queue size means and how you preview the queue; I need to do some further investigation to flesh things out. However, a many-to-one network stream is not really that different from a network queue. It is actually a restrictive version of one: it enforces order but doesn't allow you to place new data at the front. A network stream also won't allow you to preview the data already buffered, and it doesn't support a lossy queue concept. Other than that, it is effectively a queue. You put data in on one end and pull it off on the other; order is guaranteed, and data is buffered when the reader is not pulling it off the stream.


With today's complex applications it is becoming more necessary to support multi-machine applications, which calls for a many-to-one communication mechanism.

Proven Zealot

You don't want to mix queues and network streams. Adding the functionality of queues would kill the performance of network streams.


Consider: in order to support multiple senders, one of the senders has to act as the central meeting place where the data gets lined up. That means all the secondary senders have two network machines to bounce through. A network queue is equivalent to naming a queue using an IP address and port number; that address would presumably identify the primary sender, and receivers would do an Obtain using the IP address and port.


Then, to support multiple receivers, the data has to stay on the sender, so you cannot take advantage of free space in the network traffic while waiting for a receiver to need the next block of data. A receiver, when it tries to dequeue, actually sends a message to the sender to request the next bit of data; so even if there was lots of spare bandwidth while the receiver was processing the last batch, you can't take advantage of it. You have to wait for the receiver to request data, because you never know which receiver to send to or when a new receiver will join the party. If data wasn't available, there's a weird race condition over the network between more data arriving, the sender deciding to send it to a receiver, and that receiver timing out and deciding it doesn't want the data any more. So there has to be some back and forth for the sender to verify that the receiver actually got the data; otherwise it should pick the next receiver and try again.


All that shuttling of data and waiting, etc, is something you wouldn't want to put onto a network stream, which provides very fast point-to-point support.
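The request/acknowledge dance described above could be sketched roughly like this (a hypothetical protocol in Python, not anything LabVIEW actually implements; the `DEQ`/`ACK` wire tokens and the localhost demo are invented for illustration). The key point is the last step: the sender only discards an item once the receiver acknowledges it, and puts it back otherwise.

```python
import queue
import socket
import threading

def sender(srv, data):
    """Owns the real queue; hands out items only against an ACK."""
    conn, _ = srv.accept()
    conn.settimeout(1.0)
    while True:
        req = conn.recv(16)
        if req != b"DEQ":
            break                    # receiver hung up or spoke garbage
        try:
            item = data.get_nowait()
        except queue.Empty:
            conn.sendall(b"EMPTY")
            continue
        conn.sendall(item)
        try:
            if conn.recv(16) != b"ACK":
                data.put(item)       # no ack: put the item back
        except socket.timeout:
            data.put(item)           # receiver timed out: put it back
    conn.close()

# Demo on localhost: one sender, one receiver, one item.
data = queue.Queue()
data.put(b"item-1")
srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]
threading.Thread(target=sender, args=(srv, data), daemon=True).start()

rx = socket.create_connection(("127.0.0.1", port))
rx.sendall(b"DEQ")
got = rx.recv(16)
rx.sendall(b"ACK")   # without this, the sender would requeue the item
rx.close()
print(got)
```

Even in this toy form, every dequeue costs a full round trip plus an acknowledgment, which is exactly the overhead you would not want inside a network stream.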



I wasn't saying that the network queue should replace or change network streams. I was saying to Daklu that, from a high-level conceptual view, they both accomplish the same thing: getting data from point A to point B while preserving order. I agree that both serve their purpose and, as you say, the performance cost is something you would not want to impose on network streams. However, we should not be limited to only that avenue (a one-to-one communication pipe) for network-based process communication. The network queue has its uses and it would be something very useful to have in our bag of tricks.

Proven Zealot

> The network queue has its uses and it would be something very useful to have in our bag of tricks.


You and I agree. Sorry for the misunderstanding.


So I'm joining this discussion a little late, but I was curious what use cases people are trying to solve with the requested communication strategy.  Upon looking at the code for the referenced network queue class, it appears you've created what I'll call a synchronous interface to interacting with a block of memory somewhere on the network.  In essence, you create a queue from the server process that handles client requests, and all client requests interact with that same queue in a synchronous fashion.  When I say synchronous, I mean all client requests block until the request completes or errors or times out on the remote end.  I can't asynchronously request to enqueue a piece of data and then have my application continue on.  Instead, I send a TCP message across the network to the process where the actual queue resides and I then wait for a response from that process telling me whether or not the operation succeeded.  Putting it another way, you've basically created a Remote Procedure Call (RPC) wrapper for queues.  While there are probably valid uses for this type of communication, it typically isn't the first thing I think of when trying to do many to one communication across a network since a piece of data read from one client is no longer visible from other clients.  Is this really the functionality you want or was it just the most convenient thing to implement given the tools at hand?
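The RPC-wrapper pattern described above might be sketched like this (a hypothetical Python illustration of the pattern, not the actual Network Queue class; the wire format and class names are invented). One server process owns the real queue; every client operation is a blocking request/response round trip, and an item dequeued by one client disappears for all the others:

```python
import socket
import socketserver
import threading
from collections import deque

class QueueServer(socketserver.ThreadingTCPServer):
    """The process that owns the actual queue."""
    allow_reuse_address = True
    def __init__(self, addr):
        super().__init__(addr, QueueHandler)
        self.items = deque()
        self.lock = threading.Lock()

class QueueHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                    # one request per line
            op, _, arg = line.strip().partition(b" ")
            with self.server.lock:
                if op == b"ENQ":
                    self.server.items.append(arg)
                    self.wfile.write(b"OK\n")
                elif op == b"DEQ":
                    if self.server.items:
                        # once popped, no other client ever sees this item
                        self.wfile.write(self.server.items.popleft() + b"\n")
                    else:
                        self.wfile.write(b"EMPTY\n")

class RemoteQueue:
    """Client side: every call blocks until the server replies."""
    def __init__(self, addr):
        self.sock = socket.create_connection(addr)
        self.rfile = self.sock.makefile("rb")
    def enqueue(self, item: bytes) -> bytes:
        self.sock.sendall(b"ENQ " + item + b"\n")
        return self.rfile.readline().strip()       # wait for the verdict
    def dequeue(self) -> bytes:
        self.sock.sendall(b"DEQ\n")
        return self.rfile.readline().strip()

server = QueueServer(("127.0.0.1", 0))
threading.Thread(target=server.serve_forever, daemon=True).start()
q = RemoteQueue(server.server_address)
q.enqueue(b"hello")
print(q.dequeue())
```

Note that `enqueue` here cannot return until the server answers, which is the synchronous, RPC-like behavior being questioned above.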


With the mention of network streams, my assumption was that people would like to see some sort of broadcast or "fan out" functionality built into network streams.  In contrast to above, network streams use two blocks of memory on the network and allow the application to synchronously enqueue data on one endpoint while the data is moved asynchronously across the network in the background to the other endpoint.  However, communication is limited from one computer to exactly one other computer.  When creating network streams, we discussed how we might enable 1 to N communication models, but the lossless FIFO aspect of network streams doesn't lend itself well to 1 to N communication models.  For instance, what happens if one computer reading data on the network is significantly slower than the other computers?  Should the writer hang onto the data until it can be read by all subscribers, and thus the slowest computer on the network slows everyone down?  One answer to this is that the writer should always send out new data to all subscribers, and new data should overwrite the oldest data on the readers that aren't keeping up.  This destroys the lossless nature of the communication protocol in favor of an easier broadcast mechanism across the network.  However, this is more or less what the Network Variable already does when you enable buffering, so it was hard for us to find much value in supporting this type of feature.  In the end, we decided not to try to support this type of use case until we had better customer feedback on the initial feature set and its shortcomings.
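The lossy 1-to-N trade-off described above can be made concrete with a small sketch (hypothetical Python, names invented; `deque(maxlen=...)` plays the role of each subscriber's bounded buffer). The writer never blocks; a slow reader simply loses its oldest data:

```python
from collections import deque

class LossyBroadcaster:
    """Fan-out writer with a bounded, overwrite-oldest buffer per reader."""
    def __init__(self, buffer_size=4):
        self.buffer_size = buffer_size
        self.subscribers = {}

    def subscribe(self, name):
        # deque(maxlen=n) silently drops the oldest element when full
        self.subscribers[name] = deque(maxlen=self.buffer_size)

    def write(self, item):
        for buf in self.subscribers.values():
            buf.append(item)     # never blocks, even if a reader is slow

    def read_all(self, name):
        buf = self.subscribers[name]
        items = list(buf)
        buf.clear()
        return items

b = LossyBroadcaster(buffer_size=3)
b.subscribe("fast")
b.subscribe("slow")
for i in range(5):
    b.write(i)
    if i < 2:
        b.read_all("fast")       # the fast reader keeps up early on
print(b.read_all("fast"))        # recent items only
print(b.read_all("slow"))        # this reader's oldest items were overwritten
```

This is the "new data overwrites the oldest data" behavior: the writer's throughput is decoupled from the slowest subscriber, at the cost of losslessness, much like a buffered Network Variable.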


Given all of this, do you still feel like network queues are the best way to solve your intended use case, or is there functionality that you would like to see from Network Streams or Network Variables that would better solve the problem?


Actually, the code I posted does work like a normal queue. A client can post something and then go on its merry way to process other stuff. I had the same thought you did when first looking at the code. However, I created some test apps to check, and verified that the code does indeed act like a queue.


The use case for this is to create a distributed application. And yes, a network stream is a one-to-one relationship, while queues bring a many-to-one relationship to the table.


Another use case is to create communications between the GUI and a TestStand execution. At present, communication between the two is cumbersome, and native queues do not work.