
Need to improve NXG Web module performance, looking for suggestions.

I developed an NXG web console dashboard that has a couple of charts on it. I need to display 6 hours' worth of data at one-second resolution, which means 21,600 data points per element. Currently the dashboard has two charts. One chart displays two data points. The other displays a single plot for whichever of several devices the user selects, which means the dashboard needs to hold the data for all the devices and display the correct set based on the user's selection. To reduce memory use, the charts graph integers rather than floating-point values; in fact, the elements are U16. When initially developed, I was displaying a one-hour interval and the performance generally kept up. Now that the charts have a deeper history, the updates only occur about every three seconds.

 

When the page is initially loaded, it requests the data sets and gets the full 6 hours of data. I am not overly concerned with the initial load time; currently it takes only a few seconds. Once loaded, the dashboard polls for the latest data elements, making two requests per update. Each update simply returns a single data point for each value. I have benchmarked this and the actual request for the new data is fairly fast. I believe the problem is with manipulating the data buffers for the charts. Currently I am using arrays and splitting the array when it is at the maximum history depth. Would queues with a "Lossy Enqueue" give better performance? Do web module VIs take advantage of parallel processing? Any other thoughts to improve performance?
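For reference, the behavior I mean by "Lossy Enqueue" is a fixed-depth buffer where pushing past capacity silently drops the oldest sample, instead of splitting and rebuilding the array on every update. A rough Python sketch of the idea (I can't post the actual G code, and the names here are just for illustration):

```python
from collections import deque

# Lossy fixed-depth history: appending past capacity silently drops
# the oldest sample, like a lossy enqueue on a bounded queue.
HISTORY_DEPTH = 21600  # 6 h at 1 sample/s

def make_history(depth=HISTORY_DEPTH):
    return deque(maxlen=depth)

def append_sample(history, value):
    # O(1) append with no full-buffer copy, unlike split/concatenate
    # on a plain array at max history depth.
    history.append(value)

history = make_history(depth=5)
for v in range(8):
    append_sample(history, v)
# only the newest 5 samples remain
```

The win is that each one-second update touches a single element rather than reallocating a 21,600-element array.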

 

I realize that I have not posted the code. Currently I do not have access to it. Also, I would probably have to post example code rather than the actual code itself.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 1 of 8

If you keep the 6 hour history, but only view 1 hour's worth of data, does that make a difference to update performance? Curious to know if it's a display issue, or a data processing issue with the chart.

 

You mention data buffers for the charts - are you managing the chart history yourself, or is the chart managing its own history and you're writing a new value to it?

 

If you are managing the history yourself, I'd try adding a function to decimate the 21600 points down to the number of visible pixels of the chart, using a min/max function to capture the peak data per pixel. Then you'd only be writing maybe 1000-2000 points to the chart.
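In rough Python terms (a sketch of the technique, not actual WebVI code), min/max decimation looks like this: bucket the history by pixel column and emit each bucket's minimum and maximum, so narrow spikes still show up on the chart:

```python
def minmax_decimate(samples, n_pixels):
    """Reduce samples to ~2 points per pixel column, keeping peaks.

    Each pixel-wide bucket contributes its (min, max) pair so spikes
    survive decimation; bucket order preserves the time axis.
    """
    if len(samples) <= 2 * n_pixels:
        return list(samples)  # already small enough to draw directly
    out = []
    bucket = max(1, len(samples) // n_pixels)
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        out.append(min(chunk))
        out.append(max(chunk))
    return out

# 21600 history points down to ~2000 chart points for a ~1000 px plot
points = minmax_decimate(list(range(21600)), 1000)
```

You'd rerun this once per update, which is cheap compared to redrawing 21600 points.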




Certified LabVIEW Architect
Unless otherwise stated, all code snippets and examples provided
by me are "as is", and are free to use and modify without attribution.
Message 2 of 8

@MichaelBalzer wrote:

If you keep the 6 hour history, but only view 1 hour's worth of data, does that make a difference to update performance? Curious to know if it's a display issue, or a data processing issue with the chart.

 

You mention data buffers for the charts - are you managing the chart history yourself, or is the chart managing its own history and you're writing a new value to it?

 

If you are managing the history yourself, I'd try adding a function to decimate the 21600 points down to the number of visible pixels of the chart, using a min/max function to capture the peak data per pixel. Then you'd only be writing maybe 1000-2000 points to the chart.


Thanks for the suggestions. I'll take a look at this. I am managing my own history. WebVIs are more limited in what is available and I don't believe the display chart is capable of managing the history. I need to display the full 6 hours. I like the idea of decimating the data and displaying fewer points. I will have to look at that.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 3 of 8

@Mark_Yedinak wrote:

@MichaelBalzer wrote:

If you keep the 6 hour history, but only view 1 hour's worth of data, does that make a difference to update performance? Curious to know if it's a display issue, or a data processing issue with the chart.

 

You mention data buffers for the charts - are you managing the chart history yourself, or is the chart managing its own history and you're writing a new value to it?

 

If you are managing the history yourself, I'd try adding a function to decimate the 21600 points down to the number of visible pixels of the chart, using a min/max function to capture the peak data per pixel. Then you'd only be writing maybe 1000-2000 points to the chart.


Thanks for the suggestions. I'll take a look at this. I am managing my own history. WebVIs are more limited in what is available and I don't believe the display chart is capable of managing the history. I need to display the full 6 hours. I like the idea of decimating the data and displaying fewer points. I will have to look at that.


I often keep a history of data in a DVR and use min/max decimation for the data to display. The decimation factor depends on the plot width; see this link and this one.

 

If the user zooms in on a section of the plot, I just redo the decimation with new bounds on the history data. It works well, is fast, and has a small memory footprint.
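Sketched in Python (illustrative only; `decimate_range` is a made-up name for the idea), a zoom just re-runs the same min/max decimation over the zoomed slice of the full history:

```python
def decimate_range(history, lo, hi, n_pixels):
    """Re-decimate only the zoomed-in slice [lo, hi) of the full history,
    keeping each pixel bucket's min and max so peaks stay visible."""
    window = history[lo:hi]
    bucket = max(1, len(window) // n_pixels)
    out = []
    for i in range(0, len(window), bucket):
        chunk = window[i:i + bucket]
        out.append(min(chunk))
        out.append(max(chunk))
    return out

full = list(range(21600))            # full 6 h history
zoomed = decimate_range(full, 3600, 7200, 500)  # zoom into the second hour
```

The full-resolution history stays put in the DVR; only the small decimated view is regenerated per zoom.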

 

mcduff

 

Message 4 of 8

In terms of decimation, if you use SQLite you can get every Nth point via query.

 

If I recall correctly, when I tried this with an on-disk database it needed quite a large decimation factor to be faster than reading everything, but with an in-memory database it might be more practical at the factors you have (about 1/10 to 1/20, I guess?).
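The query itself is just a modulus filter on the rowid. A minimal in-memory sketch (table and column names are invented for illustration):

```python
import sqlite3

# In-memory database with one row per one-second sample; the implicit
# rowid (starting at 1) stands in for the sample index.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (value INTEGER)")
con.executemany("INSERT INTO samples (value) VALUES (?)",
                [(v,) for v in range(21600)])

N = 20  # decimation factor of ~1/20
rows = con.execute(
    "SELECT value FROM samples WHERE (rowid - 1) % ? = 0", (N,)
).fetchall()
decimated = [r[0] for r in rows]  # every Nth point
```

Note this is plain every-Nth-point sampling, so unlike min/max decimation it can drop narrow spikes.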

 

Maybe more refactoring work than you're willing to try on a vague idea, but perhaps worth considering.


GCentral
Message 5 of 8

The first thing I would consider is no buffering on the client side at all: decimate on the server down to only the data you need to display. 3000 XY pairs at SGL precision is only 24 kB of data. Whenever anything changes, including time passing, just refetch the full displayed data.

Message 6 of 8

@drjdpowell wrote:

The first thing I would consider is no buffering on the client side at all; decimate on the server down to only the data you need to display.  3000 XY pairs at SGL precision is only 24kB of data.  Anything changes, including time passing, just refetches the full displayed data.


 

The data feed is a REST API, so the data is in JSON format, which makes the transmitted data MUCH larger. This is live data being graphed. I suppose I could look at switching to a WebSocket since the data is basically streaming; since this is live data, it is more efficient from a communications standpoint to send only the updates rather than resend the entire data set. If I go WebSocket, though, I need to find or write a library, since there is no native WebSocket support in LabVIEW. It would also mean rewriting my data server: as a REST API, the client requests the data updates, whereas with WebSockets I would need to spin up handlers and have those get the data updates from the data service. This is part of a large distributed message-based system; the internal messaging is a proprietary protocol, and the data service is the bridge to the web-based clients.

 

I appreciate all the ideas; I'm sure something will make sense. Also, I will be changing the refresh rate to once every 5 seconds, which will also help.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 7 of 8

Base64 can encode binary data into a JSON-compatible string at 4/3 the size.
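For example, 21600 U16 samples are 43200 bytes as raw binary and 57600 characters as Base64, versus potentially hundreds of kilobytes as a JSON number array. A Python sketch of the round trip (field name `"data"` is just for illustration):

```python
import base64
import json
import struct

# Pack U16 samples as little-endian binary, then Base64 them into a
# JSON-safe string at exactly 4/3 the binary size (43200 -> 57600 bytes).
samples = list(range(21600))  # U16 values, each must fit in 0..65535
raw = struct.pack("<%dH" % len(samples), *samples)  # 2 bytes per sample
payload = json.dumps({"data": base64.b64encode(raw).decode("ascii")})

# Decode side: reverse the steps to recover the original samples.
decoded_raw = base64.b64decode(json.loads(payload)["data"])
decoded = list(struct.unpack("<%dH" % (len(decoded_raw) // 2), decoded_raw))
```

Both ends just need to agree on the endianness and element type.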

 

Anyway, another option is just to improve the performance of your buffering to avoid any copies, such as with a ring buffer plus decimation for display.
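By a ring buffer I mean a preallocated fixed-size array where new samples overwrite the oldest in place, so nothing is reallocated or shifted per update. A minimal Python sketch of the idea:

```python
class RingBuffer:
    """Preallocated fixed-size history: each append overwrites one slot
    in place, so there is no per-update array copy or reallocation."""

    def __init__(self, capacity):
        self.buf = [0] * capacity  # allocated once, up front
        self.capacity = capacity
        self.head = 0              # next write position
        self.count = 0             # samples stored so far

    def append(self, value):
        self.buf[self.head] = value
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def ordered(self):
        # Oldest-to-newest view, materialized only when displaying.
        if self.count < self.capacity:
            return self.buf[:self.count]
        return self.buf[self.head:] + self.buf[:self.head]

rb = RingBuffer(5)
for v in range(8):
    rb.append(v)  # the three oldest samples get overwritten
```

The one copy that remains is building the ordered view for display, which the decimation step shrinks anyway.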

Message 8 of 8