Automotive and Embedded Networks

XNET Write (Signal Single-point) Execution Time Fluctuations

Hello,

 

I am trying to improve a device driver that uses the XNET API for monitoring and control. Initially, I used one loop for XNET Read (Signal Single-point) and another loop for XNET Write (Signal Single-point) running in parallel. Write can be turned on and off, while Read has to run constantly. I added timestamp VIs to measure their execution times, as shown below.

 

[Attached image: 1.png (block diagram with the timestamp measurements)]
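For anyone who prefers text to a screenshot, the structure I'm describing is roughly the sketch below. It is written in Python with NI's nixnet package rather than my actual LabVIEW code; every database, cluster, signal, and interface name is a placeholder, and the session class and signals.read()/write() calls are written from memory of NI's Python examples, so treat them as assumptions and check them against your installation.

```python
# Rough Python equivalent of the two parallel LabVIEW loops, with the same
# per-call timing measurement. All names below are placeholders, and the
# nixnet class/method names are assumptions based on NI's published examples.
import threading
import time

import nixnet  # NI's Python API for NI-XNET (pip install nixnet)

DATABASE = 'MyDatabase'                     # placeholder database alias
CLUSTER = 'MyCluster'                       # placeholder cluster name
READ_SIGNALS = ['EngineTemp', 'EngineRPM']  # placeholder signal names
WRITE_SIGNALS = ['TargetTorque']            # placeholder signal name
INTERFACE = 'CAN1'                          # placeholder interface

write_enabled = threading.Event()           # the "Write on/off" switch
stop = threading.Event()


def read_loop():
    """Runs constantly, like the XNET Read (Signal Single-point) loop."""
    with nixnet.SignalInSinglePointSession(INTERFACE, DATABASE, CLUSTER, READ_SIGNALS) as sess:
        while not stop.is_set():
            t0 = time.perf_counter()
            values = sess.signals.read()                    # single-point read
            read_ms = (time.perf_counter() - t0) * 1000.0   # measured execution time
            print(f'read  {read_ms:6.2f} ms  {values}')
            time.sleep(0.01)


def write_loop():
    """Runs in parallel and only writes while the switch is on."""
    with nixnet.SignalOutSinglePointSession(INTERFACE, DATABASE, CLUSTER, WRITE_SIGNALS) as sess:
        while not stop.is_set():
            if write_enabled.is_set():
                t0 = time.perf_counter()
                sess.signals.write([42.0])                  # single-point write
                write_ms = (time.perf_counter() - t0) * 1000.0
                print(f'write {write_ms:6.2f} ms')
            time.sleep(0.01)


threads = [threading.Thread(target=read_loop), threading.Thread(target=write_loop)]
for t in threads:
    t.start()
time.sleep(2.0)          # Write off: Read stays at ~0 ms
write_enabled.set()      # Write on: this is where Read's timing starts to jitter
time.sleep(5.0)
stop.set()
for t in threads:
    t.join()
```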

Surprisingly, XNET Read's execution time was consistently 0 ms while Write was off, and I was definitely getting the expected CAN data. However, whenever I turned on the XNET Write loop, XNET Read's execution time jumped around, although it was still mostly 0 ms. It seemed as if running Read and Write in parallel was causing some interference.

 

Then I serialized them into a single loop. XNET Read's time became consistently 0 ms regardless of Write's state. However, XNET Write's execution time still fluctuates a lot, between 0 and 90-something ms.
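In loop form, the serialized version is essentially the following function. It is only meant to show the structure and reuses the placeholder single-point sessions from the sketch above.

```python
import time


def serialized_loop(read_session, write_session, write_enabled, iterations=1000):
    """Single loop: read every iteration, write only while Write is enabled.

    read_session/write_session are the placeholder single-point sessions from
    the previous sketch; write_enabled is a callable returning True while the
    Write switch is on.
    """
    for _ in range(iterations):
        t0 = time.perf_counter()
        values = read_session.signals.read()                 # always runs
        read_ms = (time.perf_counter() - t0) * 1000.0        # consistently ~0 ms

        write_ms = 0.0
        if write_enabled():                                   # Write switch
            t0 = time.perf_counter()
            write_session.signals.write([42.0])               # still fluctuates 0 to 90+ ms
            write_ms = (time.perf_counter() - t0) * 1000.0

        print(f'read {read_ms:5.2f} ms   write {write_ms:6.2f} ms   {values}')
        time.sleep(0.01)
```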

 

Why is this happening, and what can I do to keep XNET Write (Signal Single-point)'s execution time consistently low?

 

Any insight will be appreciated.

 

Message 1 of 2

I'm not sure why you are seeing what you are, and I suspect only NI knows what parts of the code are shared between the reading loop and the writing loop. That being said, for your design I would want to know more about the nature of what you are reading and how time-critical it is to you.

 

In my design I have a single-point writing loop. It uses the Frame API, but that's because I sometimes need more control over the individual frame data. It only calls the write when the values need to change, and that shouldn't be all that frequent. If you don't need frame-level control, the Signal API should be fine.

The read loop in my design uses the Frame API as well, because it gives you every frame that has come in, along with timestamps of when it arrived. I have custom code that grabs the newest frame for each ID and then does the frame-to-signal conversion. This lets me log all raw CAN frames and see the time delta between everything, while still keeping a higher-level view of what the signal values are. The signal values I read aren't all that critical: I want to know what the temperature is, but if it is a few ms late, or even a bit more than that, it doesn't bother me. These signals mostly end up on my main UI, and the refresh rate is slow enough that the user doesn't notice. If it doesn't matter to your users that the displayed signals are a bit slower, it probably doesn't need to matter to you either.
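To make that concrete, here is a rough sketch of the two patterns in Python rather than LabVIEW. The bus-access functions (read_frames/write_frame) are hypothetical stubs standing in for the real Frame API calls, not actual XNET VIs or nixnet functions; the point is the write-only-on-change logic and the newest-frame-per-ID cache, not the API details.

```python
# Sketch of the design described above. The bus-access functions are
# hypothetical stubs standing in for the real Frame API read/write calls;
# only the surrounding logic is the point.
import time
from collections import namedtuple

CanFrame = namedtuple('CanFrame', ['identifier', 'timestamp', 'payload'])


def write_frame(payload):
    """Stub for the Frame API write (replace with the real XNET write)."""
    print(f'wrote frame: {payload.hex()}')


def read_frames():
    """Stub for the Frame API read: returns every frame received since the
    last call (replace with the real XNET read)."""
    return [CanFrame(0x123, time.time(), bytes([1, 2, 3, 4])),
            CanFrame(0x456, time.time(), bytes([5, 6, 7, 8]))]


def write_loop(get_outgoing_payload, should_stop):
    """Only call the write when the outgoing data actually changes."""
    last_payload = None
    while not should_stop():
        payload = get_outgoing_payload()
        if payload != last_payload:         # skip the write if nothing changed
            write_frame(payload)
            last_payload = payload
        time.sleep(0.01)


def read_loop(log_frame, should_stop):
    """Log every raw frame, but keep only the newest frame per ID for the
    frame-to-signal conversion and the UI."""
    newest_by_id = {}
    while not should_stop():
        for frame in read_frames():         # drain everything that has arrived
            log_frame(frame)                # raw log with timestamps/deltas
            newest_by_id[frame.identifier] = frame
        # frame-to-signal conversion of newest_by_id.values() would go here,
        # e.g. an XNET conversion session or your own scaling code
        time.sleep(0.05)                    # UI-rate loop; a bit of lag is fine
```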

 

I'd suggest experimenting with these different reading/writing session types and seeing if something jumps out at you as solving your issue. Like I said, our design of one Frame API read and one Frame API write seems to work fine, but I've never done timing tests for this kind of thing; what's time-critical for us lives in the individual frames. And if a read takes longer than usual, you'll still get the queue of frames that have been waiting to be read. Problems only happen when you can't read frames as fast as they are being written to the bus; then you'll eventually overflow the queue.
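If you want to convince yourself of that last point, a quick toy model of the receive queue is enough: as long as each read drains everything that has arrived, a slow iteration just means a bigger batch next time, and the backlog only grows without bound when frames arrive faster than you ever read them. This is plain arithmetic, nothing XNET-specific; the arrival rate and iteration times below are made-up numbers.

```python
# Toy model of the receive queue: frames arrive at a fixed rate and the
# reader drains the whole queue each iteration, but some iterations are slow.
arrival_rate = 1000          # frames per second on the bus (made-up figure)
queue = 0
backlog_peak = 0

# Reader iteration times in seconds: mostly 10 ms, with one 90 ms hiccup
# like the spike seen in the write timing.
iteration_times = [0.010] * 50 + [0.090] + [0.010] * 50

for dt in iteration_times:
    queue += int(arrival_rate * dt)   # frames that arrived during this iteration
    backlog_peak = max(backlog_peak, queue)
    queue = 0                         # a read-all drains the queue completely

print(f'worst-case backlog with drain-all reads: {backlog_peak} frames')
# As long as the session's queue can hold that worst-case backlog, nothing is
# lost; overflow only happens when the average arrival rate exceeds the
# average read rate.
```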

Message 2 of 2