
faster performance with cins or dlls

Hi,

I currently have a program in LabVIEW for parsing bit streams from a digital I/O board in a PXI chassis, but it is quite slow. Does anyone know whether much higher performance can be gained by using CINs or Call Library Function Nodes to call C code for the bit stream parsing? And if so, which is faster?

-Tim
Message 1 of 18


@Timothy 123 wrote:
Hi,

I currently have a program in LabVIEW for parsing bit streams from a digital I/O board in a PXI chassis, but it is quite slow. Does anyone know whether much higher performance can be gained by using CINs or Call Library Function Nodes to call C code for the bit stream parsing? And if so, which is faster?

-Tim


There is a lot to say about LabVIEW performance, but very little of it is that LabVIEW is slow per se. The problem with LabVIEW is that it is very versatile and quite simple to program, which makes it very easy to write vastly inefficient code if you do not think a bit before you start drawing wires. With traditional languages this problem is not as apparent, since it is much harder to get a working algorithm at all unless you do some serious planning and thinking beforehand.

In most cases a LabVIEW algorithm can be brought close to the performance of a well-implemented C algorithm compiled without advanced compiler optimizations. One area where LabVIEW is usually a bit hard to program efficiently is heavy bit manipulation. You can get surprising performance even there, but the code does not always look simple, because you sometimes have to do a few tricks to outsmart LabVIEW's automatic optimizer.

Basically, any algorithm that rebuilds arrays inside a loop by appending or inserting data will suffer in LabVIEW, since LabVIEW uses dynamically allocated memory for all data and has to reallocate the growing array over and over. Most of these algorithms can be made very fast by using the autoindexing feature on the border of LabVIEW loops. Using a Build Array or Concatenate Strings function inside a loop, instead of autoindexing the array at the loop border, is in fact the most common cause of performance complaints.
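The same effect is easy to demonstrate in C (a hedged sketch, with the function names and the doubling operation purely illustrative): growing a buffer element by element forces a reallocation on every pass, while allocating the final size once, which is essentially what autoindexing gives you, touches the allocator only once.

```c
#include <stdlib.h>

/* Slow pattern: grow the array on every iteration
   (the C equivalent of Build Array inside a loop). */
double *build_by_appending(const double *src, size_t n)
{
    double *out = NULL;
    for (size_t i = 0; i < n; i++) {
        double *tmp = realloc(out, (i + 1) * sizeof *out);  /* reallocation on every pass */
        if (tmp == NULL) {
            free(out);
            return NULL;
        }
        out = tmp;
        out[i] = src[i] * 2.0;
    }
    return out;
}

/* Fast pattern: allocate the final size once and fill it in place
   (roughly what autoindexing at the loop border does for you). */
double *build_preallocated(const double *src, size_t n)
{
    double *out = malloc(n * sizeof *out);
    if (out == NULL)
        return NULL;
    for (size_t i = 0; i < n; i++)
        out[i] = src[i] * 2.0;
    return out;
}
```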

As to the difference between CINs and DLLs: there is basically no speed difference. On Windows a CIN is in fact a DLL too, just one with a very specific exported interface. However, CINs are legacy technology, and there is a very good chance that as NI adds new platforms to the LabVIEW family, the CIN mechanism won't be ported to them anymore. If you start writing external code now, use shared libraries. They are the preferred method, well supported, and won't go away soon. CINs are only useful if you have old projects that already use them and need to be upgraded to newer LabVIEW versions or platforms, and even then I would strongly advise considering a port to shared libraries instead.
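To make the shared library route concrete: a function LabVIEW can call through the Call Library Function Node is just a plainly exported C function. The sketch below only illustrates the shape such a function takes; the name UnpackBits and the MSB-first unpacking scheme are assumptions, not Tim's actual format.

```c
#include <stdint.h>

#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#else
#define EXPORT
#endif

/* Unpack 'nbytes' of raw data into one byte per bit (0 or 1), MSB first.
   'bits' must hold at least nbytes * 8 elements; the caller (the LabVIEW
   diagram) preallocates it and passes both arrays as array data pointers
   in the Call Library Function Node configuration. */
EXPORT int32_t UnpackBits(const uint8_t *raw, int32_t nbytes, uint8_t *bits)
{
    if (raw == NULL || bits == NULL || nbytes < 0)
        return -1;                          /* simple error code for the caller */
    for (int32_t i = 0; i < nbytes; i++)
        for (int32_t b = 0; b < 8; b++)
            bits[i * 8 + b] = (uint8_t)((raw[i] >> (7 - b)) & 1u);
    return nbytes * 8;                      /* number of bits written */
}
```

On the LabVIEW side you would preallocate the output array (for example with Initialize Array) and configure both arrays as array data pointers in the node.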

Rolf Kalbermatter

Message Edited by rolfk on 01-15-2007 10:24 AM

Rolf Kalbermatter
My Blog
Message 2 of 18
There is an important performance difference between CINs and DLLs.  CINs always execute in the UI thread.  This means a swap to the UI thread whenever that code runs, usually slowing your program down.  With DLLs, you have a choice.  If your DLL is multithread safe, you can configure it to be reentrant and get a large performance boost.
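"Multithread safe" here roughly means the function keeps no shared mutable state between calls. A hypothetical pair of functions to illustrate the distinction:

```c
/* Safe to mark reentrant: all state lives on the caller's stack. */
double scale_sample(double x, double gain)
{
    return x * gain;
}

/* NOT safe to mark reentrant as written: the static accumulator is
   shared by every thread that happens to call this function. */
double running_sum(double x)
{
    static double total = 0.0;   /* shared mutable state between calls */
    total += x;
    return total;
}
```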
Message 3 of 18


@DFGray wrote:
There is an important performance difference between CINs and DLLs.  CINs always execute in the UI thread.  This means a swap to the UI thread whenever that code runs, usually slowing your program down.  With DLLs, you have a choice.  If your DLL is multithread safe, you can configure it to be reentrant and get a large performance boost.


That is not true! By adding an optional CINProperties() function with the appropriate parameter handling, you can tell LabVIEW that the CIN is reentrant, and the CIN node icon will accordingly show light yellow. The difference is that here the CIN developer declares whether it is reentrant (which is good, since if anyone should know, it is the developer), whereas for DLLs the person creating the VI has to know (or guess).
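From memory of the Code Interface Reference Manual, the property function looks roughly like this (verify the constant names against extcode.h for your LabVIEW version):

```c
#include "extcode.h"

/* Report to LabVIEW that this CIN may be executed reentrantly. */
CIN MgErr CINProperties(int32 prop, void *data)
{
    switch (prop) {
        case kCINIsReentrant:
            *(Bool32 *)data = TRUE;
            return noErr;
    }
    return mgNotSupported;
}
```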

But that is still no good reason to use CINs. There is virtually nothing you can do with CINs that you can't do with shared libraries/DLLs, and shared libraries are a lot easier to create, develop, debug, and maintain in the long run.

Rolf Kalbermatter

Rolf Kalbermatter
My Blog
Message 4 of 18
I stand corrected.  I have only experienced the user view of a CIN.  Thanks for the info!
Message 5 of 18
Hi,

I wrote a VI to take the average of an array of numbers and noticed that the C code called through the Call Library Function Node took longer, up to about 5 times longer. Is this true in general? It seems there is some overhead from calling the C code. Is there some amount of data beyond which calling the C code becomes faster? I have attached the VI here as well.
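For reference, the C side of a test like this is essentially just a simple loop, along the lines of the sketch below (an illustrative reconstruction, not the exact code in the attachment):

```c
#include <stdint.h>

#ifdef _WIN32
#define EXPORT __declspec(dllexport)
#else
#define EXPORT
#endif

/* Return the mean of 'n' doubles; 0.0 for an empty or invalid array. */
EXPORT double ArrayMean(const double *data, int32_t n)
{
    if (data == NULL || n <= 0)
        return 0.0;
    double sum = 0.0;
    for (int32_t i = 0; i < n; i++)
        sum += data[i];
    return sum / (double)n;
}
```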
Message 6 of 18
What is involved in your "parsing of bit streams"?
 
I agree with rolf that LabVIEW can be extremely efficient, even with complex bit operations. A nice example is the first LabVIEW coding challenge from 2002. Try to solve it yourself, then study the results, you might get a few surprises. 🙂
 
For efficient code:
  1. Avoid data copies and buffer allocations
  2. Do computations "in place"
  3. Use fixed size arrays (avoid e.g. build array, insert into array, delete from array, etc.)
  4. ...

If you would give us an example of the parsing you need to do, maybe the collective brain of the forums will come up with a pure LabVIEW solution that is simpler and orders of magnitude faster. 😄

Message Edited by altenbach on 01-17-2007 10:01 AM

Message 7 of 18
For anyone who clicked on the results page altenbach linked, the enlarge functions don't seem to work to view the submission screenshots - at least not for me with Firefox or IE7 (am I the only one?). So I'm manually adding the pics of those bit-twiddling submissions here from top to bottom. Enjoy!

http://zone.ni.com/images/devzone/us/codingchallenge/simple.gif
http://zone.ni.com/images/devzone/us/codingchallenge/simple_with_lookup.gif
http://zone.ni.com/images/devzone/us/codingchallenge/two_at_a_time.gif
http://zone.ni.com/images/devzone/us/codingchallenge/biglookuptable.gif
http://zone.ni.com/images/devzone/us/codingchallenge/winner.gif

Jarrod S.
National Instruments
Message 8 of 18
Hi,

Thank you for your posts.

My bit stream parsing application is different from the one described in the challenge.

It is described in more detail in the attached Word file. I have also included the existing LabVIEW application I have used to do this, as well as a set of test data in Excel. The test data only has 2 records, however.

Any ideas would be helpful.

-Tim
Message 9 of 18
Two quick notes:
  1. As seen in the screenshot below, you're commonly comparing a U16 array with a U8 array. That red dot (it might appear grey in your version) means LV has to convert your U8 array data up to a U16 every time the comparison happens. That is certainly not optimal. Decide which numeric data type encompasses all the data you need to work with and use that data type, or at least do a manual conversion from U8 to U16 outside the loop so LV doesn't have to waste time repeating this process.
  2. Look for more occasions to use auto-indexing in your loops instead of manual indexing. I found at least one instance of this.

Message Edited by Jarrod S. on 01-17-2007 02:08 PM

Jarrod S.
National Instruments
Message 10 of 18