09-26-2008 08:21 AM
Hello again,
In my distributed cRIO system (9 cRIOs and 4 PCs) I have decided to use the scan engine coupled with an intermediate layer at the RT level to provide
additional features such as programmable channel scaling, filtering, deadbanding, CVT binding, and Forced I/O, which would be very handy for giving HMIs
a transparent way to go into 'manual' mode and override RT control of I/O resources. I suppose I could live with a 900 ms delay between pushing a button on the HMI
and seeing an effect in the cRIO hardware, but I am afraid it would make my HMI feel very unresponsive and would be an embarrassing aspect of an otherwise well-constructed design.
I would like to see if it might be possible to obtain some beta code that would let me temporarily work around this problem until LV 8.6.1. Is that possible in the short term, or should
I abandon the use of Forced I/O for my application?
Thanks.
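(Editor's note: since LabVIEW is graphical, here is a rough Python sketch of the intermediate layer described above; the `Channel` class, its field names, and the override logic are my own illustration, not an NI API.)

```python
class Channel:
    """One channel of the RT intermediate layer: mx+b scaling,
    deadbanding, and a Forced I/O override for HMI 'manual' mode."""

    def __init__(self, m=1.0, b=0.0, deadband_pct=0.0):
        self.m, self.b = m, b
        self.deadband_pct = deadband_pct
        self.forced = False          # set by the HMI to take manual control
        self.forced_value = 0.0
        self.last_published = None

    def process(self, raw):
        if self.forced:
            # Forced I/O: the HMI's value overrides the scanned data.
            self.last_published = self.forced_value
            return self.forced_value
        value = self.m * raw + self.b            # programmable mx+b scaling
        if self.last_published is not None:
            span = abs(self.last_published) or 1.0
            if abs(value - self.last_published) < self.deadband_pct / 100.0 * span:
                return self.last_published       # inside deadband: suppress update
        self.last_published = value
        return value
```

In a real deployment this per-channel step would run between the scan engine read and the CVT publish on each RT scan.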
09-26-2008 11:25 AM
Thanks for your reply. I am running my test case on a local cRIO-9074; it forces an analog input channel on a 9237 module, and I am seeing ~900 ms transaction times. Could you please send me your example for comparison? I have not tested it yet, but I am thinking about using an asynchronous wrapper around the Force Variable VI so that at least it would not stall my HMI.
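(Editor's note: the asynchronous-wrapper idea above amounts to queuing the slow force call on a background worker so the HMI event handler returns immediately. A minimal Python sketch of that pattern follows; `force_variable` is a hypothetical stand-in for the ~900 ms Force Variable operation, not a real NI call.)

```python
import threading
import queue

calls = []

def force_variable(channel, value):
    # Placeholder for the slow (~900 ms) Force Variable operation.
    calls.append((channel, value))

class AsyncForcer:
    """Runs force requests on a worker thread so the caller never blocks."""

    def __init__(self):
        self._q = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            channel, value = self._q.get()
            force_variable(channel, value)   # slow call happens off the UI loop
            self._q.task_done()

    def request(self, channel, value):
        # Returns immediately; the HMI stays responsive.
        self._q.put((channel, value))
```

The LabVIEW equivalent would be a queued-message handler or a reentrant VI launched with the asynchronous Call By Reference, with the HMI only enqueuing requests.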
09-26-2008 11:57 AM
sachsm wrote:
The real godsend would be NI providing a programmable I/O scaling API within the scan engine.
Can you describe in more detail what you mean by this? Do you mean the ability to programmatically change the scaling currently provided on I/O variables? Or something else?
thanks for your feedback,
greg
09-26-2008 01:01 PM
sachsm wrote:
Yes, that is exactly what I need. It is a requirement of the system I am designing to have an external spreadsheet containing mx+b scaling, filtering cutoff, zero offset, deadband %, etc. This information gets compiled and FTP'd to the targets. If I could do the scaling directly in the scan engine, it would significantly simplify my design. Even if I just knew it was coming in LV 8.6.1, I might be able to defer and manually enter all of the hundreds of scaling coefficients directly in the project; when the feature becomes available, my configuration management tools would take over that responsibility.
Is the current scaling (mx+b) provided by IO variables plus a programmatic configuration API sufficient for your use case, or would you also need filtering cutoff, zero offset, Deadband%, etc?
thanks,
greg
09-26-2008 01:22 PM
Surely you are teasing me 🙂 Of course I would like all of the above, since it will never be as efficient for me to intercept the scan engine data and process it myself. That is especially true for filtering, since it would be
much better done at the higher sample rates available in the scan engine; my LPF currently runs at a sample rate of 50 Hz. For now, though, the greatest simplification would come from being able to externally manage the mx+b scaling. An additional zero offset for on-the-fly calibration would also be nice. I understand you are trying to prioritize the most requested features and that managed mx+b scaling is high on the list. Will there be any intermediate pioneer releases? Can you state that it will be in the LV 8.6.1 release?
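(Editor's note: until scan-engine-managed scaling exists, the spreadsheet workflow described in this thread can be approximated in the intermediate layer. A rough Python sketch follows, assuming a CSV with one row per channel; the column names, `load_scaling`, and `make_lpf` are my own invention, not part of any NI tooling.)

```python
import csv
import io
import math

def load_scaling(csv_text):
    """Parse a per-channel table: channel, m, b, zero_offset, lpf_cutoff_hz."""
    table = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        table[row["channel"]] = {
            k: float(row[k]) for k in ("m", "b", "zero_offset", "lpf_cutoff_hz")
        }
    return table

def make_lpf(cutoff_hz, sample_rate_hz=50.0):
    """First-order IIR low-pass, stepped at the 50 Hz rate mentioned above."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate_hz)
    state = [None]
    def step(x):
        state[0] = x if state[0] is None else state[0] + alpha * (x - state[0])
        return state[0]
    return step

def scale(raw, cfg):
    """mx+b scaling plus an extra zero offset for on-the-fly calibration."""
    return cfg["m"] * raw + cfg["b"] + cfg["zero_offset"]
```

The compiled table could still be FTP'd to each target; the RT layer would reload it at startup and build one filter per channel from `lpf_cutoff_hz`.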