03-16-2017 04:06 AM
Hello everyone.
In my application, I am detecting pulses. Each pulse read out by my hardware is represented by a .NET object that contains channel, time, error, and other information. I extract this information with a property node and put it into an array of clusters; see the attached image.
This process is greatly slowing down the rate at which pulses can be read out. I have now reached a limit of 33 kHits/s, and the Performance and Memory tool shows that this very process of getting the Time and Channel information is the culprit, as you can also see from the attached screenshot. From this we see that processing a 1000-element array of HPTDCHit .NET objects takes 13.5 milliseconds on average. Naturally, I am using In Place Element structures to prevent reallocation of arrays.
33 kHits/s is roughly one tenth of what we desire, so we turn to the community for help. Are there any ways this process can be sped up? Perhaps property nodes carry a lot of overhead, and it would be faster to do this in a block of C code? Any help is greatly appreciated.
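To make the measurement concrete, here is a minimal Python sketch of the same benchmark: time how long per-element property extraction takes for a 1000-element array and divide by the element count. The `HPTDCHit` class here is a pure-Python mock standing in for the vendor's .NET type; in the real application each property read additionally crosses the LabVIEW/.NET interop boundary, which is where the suspected overhead lies.

```python
import time

# Pure-Python mock of the .NET HPTDCHit object (hypothetical fields).
class HPTDCHit:
    def __init__(self, channel, t):
        self._channel = channel
        self._time = t

    @property
    def channel(self):  # models one property-node call per access
        return self._channel

    @property
    def time(self):
        return self._time

# Build a 1000-element array of mock hits.
hits = [HPTDCHit(i % 8, i * 25e-9) for i in range(1000)]

# Time the per-element extraction of (channel, time) pairs.
start = time.perf_counter()
extracted = [(h.channel, h.time) for h in hits]
elapsed = time.perf_counter() - start
print(f"{elapsed / len(hits) * 1e6:.2f} us per hit")
```

With the real .NET objects, the same loop structure lets you compare the measured per-hit time directly against the 13.5 ms / 1000 elements reported above.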
03-16-2017 04:27 AM - edited 03-16-2017 04:28 AM
Hi Blub,
processing a 1000-element array of HPTDCHit .NET objects takes on average 13.5 milliseconds.
So the DotNet call (plus additional LabVIEW data processing) takes 13.5µs per item!
I think that's not bad for DotNet handling at all… (I guess that subVI needs only some simple fast math for raw->proper time conversion.)
Does your device allow you to request/read an array of "hits" instead of requesting single items? That would be the most promising way to reduce CPU load and processing time!
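The point of batching can be sketched as follows. The `read_hit`/`read_hits` names and the `Driver` class are invented for illustration (the actual vendor API differs); what matters is that a batch read replaces 1000 boundary crossings with one.

```python
# Hypothetical driver mock: compare N single-item calls vs one batch call.
class Driver:
    def __init__(self):
        self._buffer = list(range(10000))  # pretend hit data
        self._pos = 0

    def read_hit(self):
        # One call per hit: in the real system, one interop crossing each.
        hit = self._buffer[self._pos]
        self._pos += 1
        return hit

    def read_hits(self, n):
        # One call for n hits: a single crossing amortized over the batch.
        chunk = self._buffer[self._pos:self._pos + n]
        self._pos += n
        return chunk

drv = Driver()
singles = [drv.read_hit() for _ in range(1000)]  # 1000 crossings
batch = drv.read_hits(1000)                      # 1 crossing
```

If the per-call overhead is on the order of 10 µs, the batched variant removes almost all of it.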
03-16-2017 05:07 AM
We have little experience with .NET objects. 13 microseconds for merely reading two parameters seems excessive. Unfortunately, the .NET driver that came with the equipment does not support what you describe. We are therefore exploring other ways to extract the information from the .NET objects.
03-16-2017 05:13 AM - edited 03-16-2017 05:13 AM
Hi Blub,
13 microseconds for merely reading two parameters seems excessive.
You are talking about communication between separate processes with several layers of software drivers in between, so I still think 13µs is not much…
You could cross-check those timing numbers by writing a small test application in a different programming language, then compare its results with your current LabVIEW results!
03-16-2017 05:55 AM
You could try to parallelize the loop and remove the In Place Element structure; as I understand it, the IPE structure takes a data lock (somewhat like calling a subVI), which costs a small amount of time. Autoindexing should also be marginally faster. This of course depends on whether the .NET code is thread-safe.
/Y
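The parallel-loop idea can be sketched in Python with an order-preserving parallel map. The `extract` function here is a placeholder for the per-hit property reads; as noted above, running it concurrently is only safe if the underlying .NET objects are thread-safe.

```python
from concurrent.futures import ThreadPoolExecutor

def extract(hit):
    # Placeholder for the per-hit property reads (channel, raw time).
    # Applies a hypothetical raw->seconds conversion of 25 ns per tick.
    channel, raw = hit
    return (channel, raw * 25e-9)

# Mock input: 1000 (channel, raw_time) tuples.
hits = [(i % 8, i) for i in range(1000)]

# pool.map preserves input order, so results line up with hits.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(extract, hits))
```

Note that for a workload this light per element, thread overhead can outweigh the gain; in LabVIEW the equivalent would be a parallelized For loop with autoindexed output.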
03-16-2017 01:45 PM - edited 03-16-2017 01:50 PM
Hmm, this sounds a bit familiar....
This is part of the reality of inter-mixing different technologies.
My suggestion is to execute the entire acquisition (or large parts of it) in .NET and then let the caller (LabVIEW) acquire an entire buffer of data at less frequent intervals. This might reduce the overhead a bit. It does mean creating an additional assembly that performs the calls on your supplied DLL and then exposes the buffered data over an interface.
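The buffering pattern can be sketched as a producer/consumer pair: a background thread performs the fast per-hit driver calls and fills a queue, while the caller drains the whole buffer in one go instead of item by item. The `acquire_hit` function is a hypothetical stand-in for the vendor driver call; the real version would live in the wrapper assembly.

```python
import queue
import threading

buf = queue.Queue()

def acquire_hit(i):
    # Hypothetical stand-in for one per-hit driver call.
    return (i % 8, i * 25e-9)  # (channel, time)

def producer(n):
    # Tight acquisition loop, kept on the driver side of the boundary.
    for i in range(n):
        buf.put(acquire_hit(i))

t = threading.Thread(target=producer, args=(1000,))
t.start()
t.join()

# The caller drains the whole buffer at once (one "interop" call's worth
# of work per chunk, rather than per hit).
chunk = []
while not buf.empty():
    chunk.append(buf.get())
```

In the actual solution the producer loop would be C# code inside the extra assembly, and LabVIEW would call a single method returning the accumulated chunk.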
Another troubleshooting option, though a bit more work, is to use a profiling tool (Visual Studio would do, but there are others) to see how long the property calls take from the .NET CLR's perspective. This might highlight where the bottleneck is.
Also, a dumb question: I assume the VI is running with debugging disabled?
03-17-2017 08:35 AM
Thank you for all the replies. Yamadea's suggestion actually worked quite well, at least the idea of removing the In Place Element structure. This increased the speed of processing a 1000-element array tenfold.