So let's say I have an XNET session configured as a Signal Single Point Read. I'm reading some signals and things work great because I have a device on the bus putting out data at the expected rate. Then let's say my device turns off. Single Point mode holds the last read value for what seems like an indefinite amount of time. I've mentioned this before, and it always seemed like a bug to me, which is partly why I wrote my own frame-to-signal conversion, which takes raw frames and returns NaN for signals that aren't present in the frame.
Now I'm trying to optimize some of my code, and offloading as much work as possible to the hardware would be best. So instead of performing all the frame-to-signal conversion in software, I wanted to take advantage of the XNET API and have it return the signal values. The only issue is the one I just mentioned, where signal values are held forever. Is there some XNET feature I'm unaware of that lets signals revert to NaN after some amount of time if no update to the signal is seen?
Another option I thought about is having two sessions: one reading all frames and keeping track of the IDs seen, and then if an ID that was seen before stops appearing, find which signals belong to that frame and replace the values returned from the signal read with NaN. I'm just not sure all of this effort will save much over the G implementation of the conversion. Thanks.
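The two-session idea above could be sketched roughly like this. This is a minimal pure-Python sketch of the bookkeeping only, not the NI-XNET API (the class and method names here are my own invention): a second session's raw frame reads stamp each frame ID as "seen", and any signal belonging to a frame that hasn't been seen within the hold time gets masked with NaN.

```python
import math
import time

# Hypothetical sketch (not the NI-XNET API): track which frame IDs have been
# seen recently, and mask the signals of stale frames with NaN.
class StaleFrameMasker:
    def __init__(self, frame_to_signal_indices, hold_s=2.0):
        # frame_to_signal_indices: dict mapping a frame ID to the indexes it
        # occupies in the array returned by the Signal Single Point read
        self.frame_to_signal_indices = frame_to_signal_indices
        self.hold_s = hold_s
        self.last_seen = {}  # frame ID -> timestamp of the last raw frame

    def frame_seen(self, frame_id, now=None):
        # Call this for every raw frame the second (frame-reading) session sees.
        self.last_seen[frame_id] = time.monotonic() if now is None else now

    def mask(self, signal_values, now=None):
        # Replace signals of frames not seen within hold_s with NaN.
        now = time.monotonic() if now is None else now
        out = list(signal_values)
        for frame_id, indices in self.frame_to_signal_indices.items():
            t = self.last_seen.get(frame_id)
            if t is None or (now - t) > self.hold_s:
                for i in indices:
                    out[i] = math.nan
        return out
```

The `now` parameters are only there to make the sketch deterministic for testing; in practice the wall-clock default would be used.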
There isn't an XNET feature that currently does this, so your idea of using a second XNET session would probably be the best solution. Alternatively, since the frames are basically arrays, you could manually check them for repeats of the same value over a number of frames equivalent to a certain time, then assume the device disconnected and replace all but the first repeated value with NaN.
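The repeat-detection idea above could be sketched as follows. This is a hedged illustration, not XNET code; `detect_stale` is a name I made up, and note the inherent weakness that a signal which legitimately holds a constant value would be falsely flagged as stale.

```python
import math

# Hypothetical sketch of the repeat-detection idea: if the last `max_repeats`
# single-point reads of a signal all returned the same value, assume the
# device stopped transmitting and report NaN instead of the held value.
def detect_stale(history, max_repeats):
    """history: the most recent single-point reads of one signal, oldest first."""
    if len(history) <= max_repeats:
        return history[-1]          # not enough reads yet to judge
    tail = history[-(max_repeats + 1):]
    if all(v == tail[0] for v in tail):
        return math.nan             # value held too long: assume disconnect
    return history[-1]
```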
Thanks for the information. Yes, the other option I thought about after posting this was to use the Signal XY method. There I get the time and signal value with every frame read; if the read is empty, I know that frame hasn't come in since the last read. To effectively create a single point, all I should need to do is take the last value in the array of values read, keep track of the time it came in, and if it stays empty for too long, set it back to NaN. I'm unsure which is more efficient, since one method has me taking in raw frames (many of which I don't care about) and finding the IDs I'm interested in, while the other returns all values for all signals, which is also unnecessary, but I suspect it might be more efficient on a heavily loaded bus. If I get some time I'll try to come up with an example of what I was thinking.
Okay, attached is my attempt at using the XY session type to accomplish what I was talking about. It will hold a value for up to two seconds if the signal hasn't been read in that amount of time, at which point it reverts to NaN. Still, it would be helpful if this functionality were built into the XNET API.
Okay, Idea Exchange post is up. While I was working on it I also thought about having a feature similar to DAQmx logging, where the API could log directly to a TDMS file instead of having to read frames and then write them in software, so I posted that idea as well.
It has been a while, but I think there is a trigger signal that will let you know whether the data is new or not. You may be able to use that to create your NaN value.
Kudos for teaching me something new, but I still see some difficulties in implementing this technique.
So on start I can look at the signals, find all the common frames, and add triggers to those frame types (ignoring the less common mux signals for now). My read will then return a single array containing both actual values and trigger values. I'll need to keep track of which indexes of my trigger signals belong to which indexes of the actual values (with expected overlap); this technique means minimizing the number of trigger signals needed. Then there's the difficulty that the trigger reports a 0 only since the last read, not for some configurable amount of time. I likely want to update the signal values fairly often, say every 100 ms or faster, but I may want the hold to be on the order of 1 or 2 seconds, and signals will be coming in at anywhere from 10 ms to 1 s. This means keeping track of when the trigger value went to 0, checking whether it is still zero after the hold time has elapsed, and only then clearing the value to NaN.
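The trigger bookkeeping described above could be sketched like this. Once more, a pure-Python sketch under my own assumptions rather than XNET code: each trigger is assumed to read 1 if its frame arrived since the last read and 0 otherwise, and the signal is cleared to NaN only after the trigger has stayed 0 for the full hold time.

```python
import math
import time

# Hypothetical sketch of the trigger-signal approach: remember when each
# trigger first went to 0, and only clear the signal to NaN once it has
# stayed 0 for the full hold time.
class TriggerHold:
    def __init__(self, num_signals, hold_s=2.0):
        self.zero_since = [None] * num_signals  # when the trigger first read 0
        self.hold_s = hold_s

    def apply(self, values, triggers, now=None):
        now = time.monotonic() if now is None else now
        out = list(values)
        for i, trig in enumerate(triggers):
            if trig:                          # new frame arrived: reset clock
                self.zero_since[i] = None
            else:
                if self.zero_since[i] is None:
                    self.zero_since[i] = now  # trigger just went to 0
                if (now - self.zero_since[i]) > self.hold_s:
                    out[i] = math.nan         # held too long: revert to NaN
        return out
```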
So this technique means I don't need to read the XY signal type, which I assume is more intensive since I'd be throwing away the many values that aren't the newest. But it does mean fewer signals to start the session with, which I assume also matters for performance. The trigger technique is probably better, but I do occasionally get a request to calculate the min/max of a signal, and for that I'll need the XY values anyway.
Maybe if I get some time I'll benchmark these two solutions and see how they perform. Thanks again (but a single property node for this would still be awesome).
EDIT: Okay, so you can't mix frames and signals; that makes sense. So you need one trigger signal for each normal signal.
Okay, attached is the trigger technique for holding a signal's values. It appears to be more efficient than the XY method I mentioned earlier, assuming you really only want the latest value. I'm not sure how this will fit into my overall application since, as I mentioned, min/max is sometimes useful, in which case XY is the better method, but still.
I tested this on a lower-power embedded Linux RT chassis, a cDAQ-9132. CPU usage was between 6% and 12% running the earlier XY method with the bus loaded to about 60%, and between 5% and 8% running the trigger method. Not incredibly different, but enough to justify this as the preferred method if the latest value is really all you want. Also, this data was for reading only a few signals; if there were many more to read at once, I'm unsure how the results would change.