.NET property node is (relatively) slow

Hello everyone.

In my application, I am detecting pulses. Each pulse read out by my hardware is represented by a .NET object which contains channel, time, error, and other information. I extract this information with a property node and put it in an array of clusters; see the attached image.

This process greatly limits the rate at which pulses can be read out. I have now reached a limit of 33 kHits/s, and the Performance and Memory tool shows that it is indeed this step of getting the Time and Channel information that is the culprit, as you can also see in the attached screenshot. From this we see that processing a 1000-element array of HPTDCHit .NET objects takes 13.5 milliseconds on average. Naturally, I am using In Place Element structures to prevent reallocation of arrays.

 

33 kHits/s is roughly one tenth of what we desire, so we turn to the community for help. Are there any ways this process can be sped up? Perhaps there is a lot of overhead in the property nodes, and it would be faster to do this processing in a block of C code? Any help is greatly appreciated.
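To make the question concrete, here is a rough sketch of what such a block might look like, written against a hypothetical plain C++ struct. The real HPTDCHit is a .NET class from the vendor's assembly, so the names and fields below are only assumptions:

```cpp
// Hypothetical sketch only: the real HPTDCHit is a .NET class exposed by the
// vendor's assembly, not this plain struct.
#include <cstdint>
#include <vector>

struct HPTDCHit {           // assumed fields, mirroring the properties read in LabVIEW
    int32_t channel;
    double  time;
    int32_t error;
};

struct HitRecord {          // the "cluster" built per hit
    int32_t channel;
    double  time;
};

// Extract channel and time from every hit into a preallocated record array.
std::vector<HitRecord> extract(const std::vector<HPTDCHit>& hits)
{
    std::vector<HitRecord> out(hits.size());
    for (std::size_t i = 0; i < hits.size(); ++i) {
        out[i].channel = hits[i].channel;   // in LabVIEW: one property node read
        out[i].time    = hits[i].time;      // and a second property node read
    }
    return out;
}
```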

Message 1 of 7

Hi Blub,

 

processing a 1000-element array of HPTDCHit .NET objects takes on average 13.5 milliseconds.

So the DotNet call (plus additional LabVIEW data processing) takes 13.5µs per item!

I think that's not bad for DotNet handling at all… (I guess that subVI needs only some simple fast math for raw->proper time conversion.)

 

Does your device allow you to request/read an array of "hits" instead of requesting single items? This would be the most promising way to reduce CPU load/processing time!
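To illustrate the idea, here is a rough sketch of the two API shapes in C++ with invented, stubbed-out calls (the real driver's names and types will differ):

```cpp
// Illustration only: these driver calls are invented stubs. The point is the
// shape of the API: a single bulk call pays the fixed call overhead once per
// block of hits instead of once (or twice) per hit.
#include <cstddef>
#include <cstdint>
#include <vector>

// Per-item style: every value costs one call's worth of overhead.
int32_t GetHitChannel(std::size_t i) { return static_cast<int32_t>(i % 8); }     // stub
double  GetHitTime(std::size_t i)    { return static_cast<double>(i) * 25e-12; } // stub

// Bulk style: one call fills whole arrays of channels and times for n hits.
void ReadHits(std::vector<int32_t>& channels, std::vector<double>& times, std::size_t n)
{
    channels.resize(n);
    times.resize(n);
    for (std::size_t i = 0; i < n; ++i) {                   // no per-value call overhead here
        channels[i] = static_cast<int32_t>(i % 8);          // stub data
        times[i]    = static_cast<double>(i) * 25e-12;      // stub data
    }
}
```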

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 7

We have little experience with .NET objects. 13 microseconds for merely reading two parameters seems excessive. Unfortunately, the .NET driver that came with the equipment does not support what you describe. We are therefore exploring other ways to extract the information from the .NET objects.

Message 3 of 7

Hi Blub,

 

13 microseconds for merely reading two parameters seems excessive.

You are talking about communication between separate processes with several layers of software drivers in between, so I still think 13 µs is not much…

 

You may cross-check those timing numbers by writing an application using a different programming language. Then compare the results you get there with your current LabVIEW results!
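For example, the measurement pattern could look like the sketch below. A stub stands in for the property read here; for a real cross-check you would call the vendor's .NET assembly from a small console program instead:

```cpp
// Minimal timing-harness sketch. A stub stands in for the HPTDCHit.Time read
// so that only the measurement pattern is shown.
#include <chrono>
#include <cstdio>

volatile double sink;               // keeps the compiler from optimising the loop away

double ReadTimeProperty(int i)      // stand-in for the real property read
{
    return i * 25e-12;
}

int main()
{
    constexpr int kHits = 1000;
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kHits; ++i)
        sink = ReadTimeProperty(i);
    const auto t1 = std::chrono::steady_clock::now();
    const double us = std::chrono::duration<double, std::micro>(t1 - t0).count();
    std::printf("%d reads took %.1f us (%.3f us per read)\n", kHits, us, us / kHits);
    return 0;
}
```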

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 4 of 7

You could try to parallelize the loop and remove the In Place Element structure; as I understand the IPE, it takes a data lock (kind of like calling a subVI), which costs a small amount of time. Also, autoindexing should be marginally faster. This of course depends on whether the .NET object is thread safe.
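As a rough sketch of the parallel-extraction idea in text form (assuming the hit objects can safely be read from several threads, which must be verified for the real .NET driver):

```cpp
// Sketch only: each worker thread extracts its own slice of the hit array.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Hit    { int channel; double time; };   // stand-in for the .NET hit object
struct Record { int channel; double time; };   // the cluster built per hit

std::vector<Record> extract_parallel(const std::vector<Hit>& hits, unsigned workers = 4)
{
    std::vector<Record> out(hits.size());
    std::vector<std::thread> pool;
    const std::size_t chunk = (hits.size() + workers - 1) / workers;

    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(begin + chunk, hits.size());
        pool.emplace_back([&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)   // each worker handles one slice
                out[i] = { hits[i].channel, hits[i].time };
        });
    }
    for (auto& t : pool) t.join();
    return out;
}
```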

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 5 of 7

Hmm, this sounds a bit familiar....

 

This is part of the reality of inter-mixing different technologies. You have:

  • Windows OS. This isn't real-time; the scheduler can pre-empt your thread at any time and leave it sitting there waiting to continue. You have very limited control over this. Windows was designed for multitasking.
  • .NET CLR COM hosting inside LabVIEW. LabVIEW needs to operate on the COM CLR object to interact with the internally hosted .NET objects and then convert the responses back into LabVIEW data types while also indicating that they don't need to be pinned internally.
  • "Reading two parameters" may not be all that those .NET Properties do; they may also execute some sort of internal logic. In fact, there is nothing technically stopping someone from writing an almost entire application in a single property; although this would clearly be outrageous. 

My suggestion is to try executing the entire acquisition (or large parts thereof) in .NET and then letting the caller (LabVIEW) acquire an entire buffer of data at less frequent intervals. This might reduce the overhead a bit. It does mean creating an additional assembly that performs the calls on your supplied DLL and then exposes the buffered data over an interface.
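The shape of such a buffering wrapper might look roughly like the sketch below, written here in plain C++ with invented names (the real wrapper would be a small .NET assembly so LabVIEW can call it directly):

```cpp
// Sketch of a buffering wrapper: an acquisition thread pushes hits as they
// arrive, and the caller drains the whole accumulated buffer at a lower rate.
#include <mutex>
#include <vector>

struct HitRecord { int channel; double time; };

class HitBuffer {
public:
    // Called from the acquisition side for every hit (or block of hits).
    void Push(const HitRecord& hit)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        buffer_.push_back(hit);
    }

    // Called from the LabVIEW side at less frequent intervals; returns
    // everything accumulated so far and leaves the buffer empty.
    std::vector<HitRecord> Drain()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        std::vector<HitRecord> out;
        out.swap(buffer_);
        return out;
    }

private:
    std::mutex mutex_;
    std::vector<HitRecord> buffer_;
};
```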

 

Another troubleshooting option, though a bit more work, is to use a profiling tool (Visual Studio would do, but there are others) to see how long the property calls take to execute from a .NET CLR perspective. This might highlight where the bottleneck is.

 

Also, a dumb question: I assume the VI is running with debugging disabled?

Message 6 of 7

Thank you for all the replies. Yamaeda's solution actually worked quite well, at least the idea of removing the In Place Element structure. This increased the speed of processing a 1000-element array tenfold.

Message 7 of 7