10-14-2015 07:31 PM
Hey guys!
Quick question about the Eval Formula Node. I've inherited some code that reads raw values (over 700 of them) from Compact FieldPoint modules and performs calibration on them. The calibration is currently hard-coded within the LabVIEW code. My job is to make this all generic, meaning a system operator can change a calibration through a database and have that change reflected in the code. So no more hard-coding of mathematical expressions.
I've stored all the calibration equations as strings (all are one input, one output) in a MySQL database. I've also created a class hierarchy that uses dynamic dispatch to decide which type of equation to use at run time. To convert the string into a mathematical expression, I use the Eval Formula Node VI (from the NI G Math library).
Working on a 2.4 GHz i7 laptop, I can complete a DAQ loop of 700 channels within 1.5 seconds (including acquisition time). Switching to a 400 MHz cFP controller, the same loop takes 16 seconds. Way too long (I want to be closer to the 1 second mark).
I've searched through forums and some people are suggesting LabPython. Any idea which alternative is good?
My immediate thought is to rip out the guts of the Eval Formula Node VI and just use the internal Coded Eval Parsed Formula String VI. This way I can convert the string expression into the Parsed Formula Node cluster ONCE upon instantiation and store the bundle within the object. In the DAQ loop I can then retrieve the bundle and apply the calibration.
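In rough pseudocode (Python here, since I can't paste LabVIEW diagrams as text; the class and names are just illustrative, and in the real code the stored item would be the Parsed Formula cluster rather than a compiled Python expression), the pattern I'm after is:

```python
# Illustrative only: parse/compile once per channel at load time,
# then only the cheap evaluate step runs inside the DAQ loop.
import math

class CalibratedChannel:
    def __init__(self, formula: str):
        # Done once, when the channel is instantiated from the database string.
        self._code = compile(formula, "<calibration>", "eval")

    def apply(self, x: float) -> float:
        # Done every DAQ iteration: no string handling, just evaluation.
        return eval(self._code, {"__builtins__": {}}, {"x": x, "sqrt": math.sqrt})

ch = CalibratedChannel("3.9083E-3 * x + 0.5")
print(ch.apply(100.0))
```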
Any thoughts or ideas?
Thanks!
Shivam
10-14-2015 08:20 PM
I have learned that when dealing with RT systems, it is best to avoid strings, and the parsing of formulas and whatnot is really slow. Maybe a better approach would be to use an enum to state the formula type along with an array of the parameters. Then you just use a case structure to do the math needed on the channel.
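In text form (Python standing in for the LabVIEW enum and Case Structure, with made-up type names), that approach is essentially:

```python
# Each channel stores an enum value plus a parameter array pulled from the database;
# the run-time work is a case selection and plain math, with no string parsing at all.
from enum import Enum

class CalType(Enum):
    NONE = 0        # params unused
    LINEAR = 1      # params = [m, b]
    POLYNOMIAL = 2  # params = [a0, a1, a2, ...]

def calibrate(cal_type: CalType, params: list, x: float) -> float:
    if cal_type is CalType.NONE:
        return x
    if cal_type is CalType.LINEAR:
        m, b = params
        return m * x + b
    if cal_type is CalType.POLYNOMIAL:
        return sum(a * x**i for i, a in enumerate(params))
    raise ValueError(f"Unsupported calibration type: {cal_type}")

print(calibrate(CalType.LINEAR, [2.0, 1.0], 10.0))   # 21.0
```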
10-14-2015 08:33 PM
Hey crossrulz!
Thanks for the very quick reply.
You're definitely right about avoiding strings and parsing. However, they have a system in which they would like to swap out sensors at will. One case is a general calibration equation, which could be anything from y = mx + b to y = arcsin(x) + x^2, etc. They want a solution in which they won't need to touch the code again, just plug and play with a web-based channel editor. So I'll still need to parse...
10-15-2015 04:24 AM - edited 10-15-2015 04:29 AM
Yeah, you will need to find a way to convert the string into an enum (of supported calibrations) and an array of parameters so you can use native maths functions rather than string parsing on the fly. Perhaps you should agree on a list of supported calibrations? Perhaps you could implement something whereby you have some standard (faster) calibrations (e.g. linear, polynomial) but also have the option of 'custom' ones which are evaluated using the formula node.
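Something like this, perhaps (Python again as stand-in pseudocode with hypothetical names, just to show the shape: the recognised kinds are plain maths, and only 'custom' ever touches a parsed formula, and even then it is parsed once up front rather than per iteration):

```python
# Build one callable per channel when the channel table is loaded.
import math

def build_calibration(kind: str, spec):
    if kind == "none":
        return lambda x: x
    if kind == "linear":                      # spec = (m, b)
        m, b = spec
        return lambda x: m * x + b
    if kind == "polynomial":                  # spec = [a0, a1, a2, ...]
        coeffs = list(spec)
        return lambda x: sum(a * x**i for i, a in enumerate(coeffs))
    # "custom": spec is an arbitrary single-input formula string, parsed once here.
    code = compile(spec, "<custom>", "eval")
    allowed = {"sqrt": math.sqrt, "asin": math.asin}
    return lambda x: eval(code, {"__builtins__": {}}, dict(allowed, x=x))

cal = build_calibration("custom", "asin(x) + x**2")
print(cal(0.5))   # ~0.7736
```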
700 signals sounds like a lot of string parsing for an RT controller!
If that isn't going to work for them, then I suggest you either avoid doing the calibration on the RT system (log raw values and post-process, or do the calibrations on a Windows host) or upgrade your system - a 400 MHz controller isn't very powerful when you compare it to some of the newer RT cRIOs - the very base models are 667 MHz dual-core and go up to 2 GHz quad-core. I've never used cFP before but it looks like there isn't really much of an upgrade path as you have the fastest chassis from what I could see.
Oh, and perhaps if you post your code, we might be able to spot other ways to improve the performance?
10-15-2015 05:04 AM
This might be a start into something a bit more performant than the NI solution. The parser and evaluator are completely separated, so you could parse the formula on the UI side (host computer), store the resulting tree as a binary structure on the RT side, and execute it there. Even if the parsing happens on the RT side (of course not every time the formula is executed, but only on startup or after the formula has been edited), it has much friendlier memory behaviour, since it does not constantly create substrings and concatenate them back into other strings.
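To illustrate what I mean about keeping the two halves separate (a toy Python mock-up only - the actual package uses its own node layout and API), the evaluator just walks a tree that was built once:

```python
# The parser builds this tree once (e.g. on the host, or at startup on the RT side);
# the per-sample work is then a cheap tree walk with no string handling at all.
from dataclasses import dataclass
from typing import Union

@dataclass
class Num:
    value: float

@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str            # '+', '-', '*', '/'
    left: "Expr"
    right: "Expr"

Expr = Union[Num, Var, BinOp]

def evaluate(node: Expr, x: float) -> float:
    if isinstance(node, Num):
        return node.value
    if isinstance(node, Var):
        return x
    lhs, rhs = evaluate(node.left, x), evaluate(node.right, x)
    return {"+": lhs + rhs, "-": lhs - rhs, "*": lhs * rhs, "/": lhs / rhs}[node.op]

# Tree for y = 3*x + 2, built once, evaluated many times in the loop.
tree = BinOp("+", BinOp("*", Num(3.0), Var("x")), Num(2.0))
print(evaluate(tree, 4.0))   # 14.0
```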
10-15-2015 10:16 AM
Hey rolfk!
Many thanks for your suggestions. I'll definitely look into your package when I have more time.
Looks like separating the parsing from the eval worked nicely. Parsing the following equation 100 times completes in about 17 seconds:
y = ((3.9083E-3) - sqrt((3.9083E-3)^2 + 4*(5.775E-7)*(1 - x/500))) / (2*5.775E-7)
Evaluating it 100 times completes in 100 ms, which at 1 ms per channel is pretty good!
10-15-2015 10:20 AM
Thanks Sam!
A lot of the channels have no calibration, so instead of running a formula on those (for example, setting the formula to y = x), I just define a class called None which doesn't use the formula node at all. Some other pulse calculations also have pre-set formulas. The formula parsing is just for the general case.
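In pseudocode terms (Python standing in for the LabVIEW classes, and the pulse maths below is a made-up example rather than the real calculation), the hierarchy looks roughly like:

```python
# Base class plus a pass-through "None" calibration; only the general case
# carries a parsed formula, and that parse happens once at load time.
class Calibration:
    def apply(self, x: float) -> float:
        raise NotImplementedError

class NoCalibration(Calibration):       # the "None" class: no formula node involved
    def apply(self, x: float) -> float:
        return x                        # raw value passes straight through

class PulseCalibration(Calibration):    # a pre-set formula, hard-wired maths (example only)
    def __init__(self, counts_per_unit: float):
        self.counts_per_unit = counts_per_unit
    def apply(self, x: float) -> float:
        return x / self.counts_per_unit

# The general case (formula from the database) holds the parsed formula bundle,
# built once at instantiation, as described in my first post.
```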
If you look at my last post, I tested the parser and evaluator separately and got pretty good performance for now. I can optimize further knowing that we don't have multiple inputs and multiple outputs, so the Evaluation VI doesn't need to be put into a loop and, consequently, the dimensions of the Parsing Bundle go down.
Definitely agree about upgrading their hardware but that's not something they want to do at the moment...
10-15-2015 10:51 AM
Shaved a further 10-12 ms on average by directly using the Coded Eval Parsed Formula String VI.
I modified this VI assuming a single input and single output, which let me further optimize by removing the outer loop within the VI. Once I had tested it, I also removed the debugging features.
Once I clean it up I'll post 🙂
10-15-2015 10:55 AM
Yup, splitting the parse and evaluate functions is definitely the way forward. Whether it's Rolf's linked method or splitting the existing LabVIEW function, just go with whatever code looks cleaner or better meets your needs (e.g. after a speed comparison).