
LabVIEW


Are clusters thread safe?

If one thread assigns a new value to a cluster while another thread reads it via a local variable, could the reading thread get part of the old value and part of the new value? Or are cluster operations atomic?
 
Does this behavior, whatever it is, apply to cRIO FPGA and controller platforms as well as desktop systems? Does it apply in multiprocessor systems?
 
Thanks,
 
-Ron
Message 1 of 9
With any data type, not just clusters, if you change the data in one thread and read it in another via a local variable, there is always a strong possibility of a race condition occurring. A race condition occurs when one thread writes and another thread reads at the same time, and it is the source of many irritating debugging problems. Instead of passing data between threads with local variables, do a search for Functional Globals, which are effectively a subVI with an uninitialized shift register, allowing you to store a value and read it back. Since the VI should be set to non-reentrant, reading and writing can't occur at the same time.
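If it helps to picture this outside of a LabVIEW diagram, here is a very rough Python analogy (made-up names, not actual LabVIEW code): the module-level variable plays the part of the uninitialized shift register, and the lock stands in for the subVI being non-reentrant, which is what keeps a read and a write from overlapping.

import threading

_lock = threading.Lock()   # stands in for the subVI being non-reentrant
_value = None              # stands in for the uninitialized shift register

def functional_global(action, data=None):
    # 'action' plays the role of the FGV's enum input: "write" or "read"
    global _value
    with _lock:            # only one caller at a time
        if action == "write":
            _value = data
        return _value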
Jeff


Using LabVIEW 7 Express
Message 2 of 9

To add to Jeff's explanation and to answer the actual question: what you have is not a variable, but a UI control. Writing to a UI control or reading from it is done in one atomic operation, so when you read from the control you won't get half of the old values and half of the new values.

However, as Jeff mentioned, this is a good way to create a race condition and is usually a sign of trying to use LabVIEW as you would use C. There are better ways to handle data.

As a side issue, you can't have two threads in the same VI unless you use timed loops and explicitly set them to separate threads, so you can't use locals in separate threads.


___________________
Try to take over the world!
Message 3 of 9
What Jeff said is true, but the race conditions that happen have to do with whole copies of the data being made. To answer your actual question: it is not possible to read part of the old cluster value and part of the updated cluster value from a local variable. Nevertheless, look into better ways of sharing data, like Functional Globals (search this forum for that term or for Action Engine) or single-element queues. Both are better ways to work with data from multiple threads.
Jarrod S.
National Instruments
Message 4 of 9


@tst wrote:

As a side issue, you can't have two threads in the same VI unless you use timed loops and explicitly set them to separate threads, so you can't use locals in separate threads.




Just to nit-pick, that's not necessarily true. There's no such hard rule regarding threading. Two while loops with Wait VIs in them will usually produce two threads. It's ultimately up to LabVIEW, though, to decide between multi-threading and multi-tasking.
Jarrod S.
National Instruments
Message 5 of 9


Jarrod S. wrote:

Just to nit-pick, that's not necessarily true. There's no such hard rule regarding threading. Two while loops with Wait VIs in them will usually produce two threads. It's ultimately up to LabVIEW, though, to decide between multi-threading and multi-tasking.
So you can have several threads (4, if memory serves?) in each execution system and LV can do that even in the same VI?
Nice, I didn't realize that, but I have to admit I never really delved into this matter.

___________________
Try to take over the world!
Message 6 of 9

"...and LV can do that even in the same VI?"

I've seen it myself. Take a giant array, split it into eight parts, feed each part to its own For Loop, and if you have enough cores, LV will split the threads across multiple cores.

"Threadconf" (sp?) will let you define the number of threads.

The article on hyperthreading touches on the topic of how the code is "chunked up".

Ben

Retired Senior Automation Systems Architect with Data Science Automation, LabVIEW Champion, Knight of NI and Prepper
Message 7 of 9

tst wrote:

When you have a UI control, writing to it or reading from it is done in one atomic operation, so when you read from the control you won't get half of the old values and half of the new values.

However, as Jeff mentioned, this is a good way to create race condition and is usually a sign of trying to use LabVIEW as you would C. There are better ways to handle data.


Here's a made-up example. I'm not interested in a specific solution to this problem; I'm interested in the general question of clusters' atomicity in multithreaded or multiprocessing environments.
 
Suppose I have a thread that monitors a serial GPS unit and derives two values: a floating-point time correction, in seconds (GPS time - platform time), and a Boolean indicating whether the GPS unit is providing a good signal, i.e., whether the time correction is valid. This thread stores these values in a cluster using Bundle by Name, so in theory both values are stored at the same time. This thread writes to the cluster, but does not read it.
 
The other threads (or however LabVIEW divides up the processing) read this cluster using Unbundle by Name, so in theory each thread reads both values at the same time. These other threads do not write to the cluster. It is not critical that all the threads read the same values from the cluster, but it is critical that each thread reads the boolean that agrees with the float it reads, i.e. that the thread doesn't read an invalid time correction and think it's valid.
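In rough textual terms (the real implementation is a LabVIEW diagram, so this Python-flavored sketch with made-up names is only an analogy), the pattern I'm describing is:

gps_status = (0.0, False)   # (time correction in seconds, correction valid?)

def gps_monitor_update(correction, valid):
    # writer: stores both values together, like Bundle by Name
    global gps_status
    gps_status = (correction, valid)

def consumer_read():
    # reader: takes both values together, like Unbundle by Name
    correction, valid = gps_status
    if valid:
        print("apply correction of", correction, "seconds")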
 
It sounds like you're saying that this does in fact work, even if the threads are running on different cores. Is my understanding correct? I realize I can play it safe and use a semaphore. I'm just wondering if I need to.
Message 8 of 9


@RonW wrote:


 
It sounds like you're saying that this does in fact work, even if the threads are running on different cores. Is my understanding correct? I realize I can play it safe and use a semaphore. I'm just wondering if I need to.



You are correct. This is completely safe. LabVIEW ensures thread safety in this case by making complete copies of the cluster control's data any time you access it with a local variable. In the example of a Boolean and a numeric, this doesn't really matter. But when clusters start to get bigger and more complicated (clusters of arrays of clusters...), these deep copies become more expensive.

While it is safe to do as you have been doing, it is more efficient to use something like a single-element queue. You use single-element queues by dequeuing the data to access it and then enqueuing it when you're done (a rough sketch follows after the two points below). There are two advantages here:
  1. When you dequeue an element from a queue, you get that exact memory, not a copy of it. Similarly, when you enqueue something, you generally put that exact piece of data into the queue, and not a copy of it. Now if you continue using that wire's data after you enqueue it, then LabVIEW will have to make a copy of it. But if it's a dead end for the wire at the Enqueue function, then there shouldn't be a copy.
  2. You ensure serialized access to data. This isn't very important in your example, because you have one writer and multiple readers. But if you had multiple writers, it would be necessary to ensure that one writer's changes don't overwrite another's.
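If it helps to see the idea outside of a diagram, here is a rough Python-flavored sketch of the single-element-queue pattern (an analogy with made-up names only; in LabVIEW you would use the Obtain Queue, Dequeue Element, and Enqueue Element functions):

import queue

shared = queue.Queue(maxsize=1)
shared.put({"correction": 0.0, "valid": False})   # seed the single element

def writer(correction, valid):
    data = shared.get()        # dequeue: any other accessor now blocks
    data["correction"] = correction
    data["valid"] = valid
    shared.put(data)           # enqueue: access is released

def reader():
    data = shared.get()        # take the one element...
    shared.put(data)           # ...and put it straight back
    return data["correction"], data["valid"]

Because the queue only ever holds one element, whoever has dequeued it has exclusive access until it is enqueued again.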
Jarrod S.
National Instruments
Message 9 of 9