04-15-2008 01:34 AM
The sequence structure itself does not add any real overhead, but I seem to remember cases where the compiler handled wires going into a structure differently from wires placed on the same diagram and so created data copies where none were "needed". This doesn't change the behavior, but it does cause a performance hit. It should be visible if you use the buffer allocations display.
If your program is *very* time critical, you might wish to run it on a real-time OS or an FPGA and maybe get an expert consultant to review it.
04-15-2008 07:46 AM
Adding more words to what tst said (but still saying the same thing).
The seq structures are only used by the compiler to control the order in which things are done, BUT the order in which things are done can change performance. I'll try to describe a simple example.
Let's say you have an array and you want to do two things to it: 1) inspect the first element (index = 0), and 2) replace the second element (index = 1).
If the element is replaced in an early frame and the first value is checked in a later frame, the original array must be copied (for use in the replace) so that the original array contents are still available for the inspection.
Conversely, if the first element is inspected BEFORE the replacement, the data can reside in a single buffer, since the non-destructive operation (inspecting) is performed before the array is modified.
These situations are characterized by the wire branching.
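To make the same idea concrete outside of LabVIEW, here is a minimal C sketch of the two orderings described above. It is only an analogy for the dataflow behavior, not actual LabVIEW-generated code; the array size, values, and function names are made up for illustration.

```c
#include <stdio.h>
#include <string.h>

#define N 4

/* Replace-before-inspect: the inspection still needs the original
   contents, so a private copy of the array must be made first. */
static void replace_then_inspect(double data[N])
{
    double original[N];
    memcpy(original, data, sizeof original);        /* extra buffer */

    data[1] = 42.0;                                 /* replace index 1 */
    printf("first element was %g\n", original[0]);  /* inspect the copy */
}

/* Inspect-before-replace: the non-destructive read happens first,
   so the same buffer can be modified in place afterwards. */
static void inspect_then_replace(double data[N])
{
    printf("first element is %g\n", data[0]);       /* inspect in place */
    data[1] = 42.0;                                 /* replace index 1 */
}

int main(void)
{
    double a[N] = {1.0, 2.0, 3.0, 4.0};
    inspect_then_replace(a);   /* one buffer is enough */

    double b[N] = {1.0, 2.0, 3.0, 4.0};
    replace_then_inspect(b);   /* forces a copy of the array */
    return 0;
}
```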
Ben
04-15-2008 11:16 AM
I am actually going to use a Real-Time OS; I forgot to mention that. I'm using a desktop PC as a Real-Time target. We have to get timing down to 1/40000 of a second between data manipulation and output intervals, so a non-real-time target is just not feasible. I am using flat sequence structures to build my algorithm because the Real-Time target does not accept m-files or MathScripts, which is irritating but understandable. If I don't get what I need out of converting the math algorithm to LabVIEW using sequence structures, then I may look into DLLs.
Thank you Ben, I already fully understand the importance of the order of operations. It actually has a lot to do with the Translation Lookaside Buffer. The TLB is where much of a computer's memory-access speed comes from. It's best to keep operations on a data structure together because there is only so much space in the TLB. Whenever you access a page in memory, the TLB is checked to see if the translation is in its table. If not, the page table is walked, and if the page isn't resident there, it page faults and goes to disk. That takes a considerable amount of time compared to when the translation is already loaded into the TLB. So once something is loaded into the TLB, it's best to go ahead and do all the needed operations on that set of data before it is kicked out of the TLB. Just elaborating on what you said, I guess, but I do understand that. Thanks!
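As a rough, hypothetical illustration of that locality argument (in C rather than LabVIEW, with a made-up buffer size and stride), the sketch below walks the same buffer twice: once sequentially, staying on each page while its translation is hot in the TLB, and once with a page-sized stride so nearly every access lands on a different page. The absolute timings will vary by machine; the point is only the direction of the effect.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MB      (1024 * 1024)
#define BYTES   (256 * MB)
#define STRIDE  4096              /* jump one typical page per access */

static double seconds(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    unsigned char *buf = malloc(BYTES);
    if (!buf) return 1;
    for (size_t i = 0; i < BYTES; i++) buf[i] = (unsigned char)i;

    struct timespec t0, t1;
    unsigned long sum = 0;

    /* Sequential walk: consecutive accesses stay on the same page. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < BYTES; i++) sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("sequential: %.3f s (sum %lu)\n", seconds(t0, t1), sum);

    /* Strided walk: same total bytes, but each access touches a new
       page, so the TLB and caches help far less per byte. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    sum = 0;
    for (size_t off = 0; off < STRIDE; off++)
        for (size_t i = off; i < BYTES; i += STRIDE) sum += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("strided:    %.3f s (sum %lu)\n", seconds(t0, t1), sum);

    free(buf);
    return 0;
}
```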
04-15-2008 12:29 PM
"We have to get timing down to 1/40000 of a second between data manipulation and output intervals, ..."
With those kinds of req's it is wise to be careful. I know that those types of loop rates are possible, but one wrong wire could throw off your performance. Let me suggest that you "benchmark early, benchmark often." It's a lot easier to find "needles" in a small "haystack" than in a large one.
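In the spirit of "benchmark early, benchmark often", here is a minimal, hypothetical timing harness (sketched in C rather than LabVIEW RT) that checks each iteration of a placeholder processing step against the 25 µs (1/40000 s) budget mentioned above. The iteration count, the process_sample placeholder, and the use of CLOCK_MONOTONIC are assumptions for illustration only; on a LabVIEW Real-Time target you would take the equivalent timestamps inside your loop.

```c
#include <stdio.h>
#include <time.h>

#define ITERATIONS 100000
#define BUDGET_NS  25000.0   /* 1/40000 s expressed in nanoseconds */

static volatile double sink;  /* keeps the work from being optimized away */

static void process_sample(int i)
{
    /* placeholder work; swap in the real data-manipulation step */
    sink = (double)i * 1.0000001;
}

static double ns_between(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    double worst = 0.0, total = 0.0;
    struct timespec t0, t1;

    for (int i = 0; i < ITERATIONS; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        process_sample(i);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double dt = ns_between(t0, t1);
        total += dt;
        if (dt > worst) worst = dt;
    }

    printf("average: %.0f ns, worst: %.0f ns, budget: %.0f ns\n",
           total / ITERATIONS, worst, BUDGET_NS);
    printf("worst case %s the budget\n",
           worst <= BUDGET_NS ? "meets" : "exceeds");
    return 0;
}
```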
Ben