01-08-2021 03:49 AM
We are trying to develop a data analysis software using LabVIEW.
Because of the heavy data processing and the sheer volume of data, each analysis takes a long time.
We have configured parallel loops for the algorithms that can be parallelized, and have optimized the efficiency as much as we can.
I want to further speed up the analysis by upgrading the hardware.
I am considering the latest generation of Intel Core i7, i9, or Intel Xeon series, or AMD 5900x.
Is clock speed or core count more important?
Do the Core i9 and Xeon have compatibility issues? We have found reports that the 3900X has compatibility issues; is that true?
In addition, is it possible to improve the drawing performance of LabVIEW waveform graphs with a GPU? I need to draw about 500k points on each of 128 waveforms per second.
01-08-2021 05:04 AM
Compatibility is fine with all of them.
It's hard to say which CPU will be faster; it depends on how parallel your processes are. On your current computer, does the analysis load all cores well? If so, I'd go with core count.
Do you need to graph all the points? Usually you do some decimation or averaging to show a reasonable number of points; after all, a screen is only 2k or 4k pixels wide, so you'd be overdrawing a couple of hundred points per pixel ...
If you want faster graphs, there's an add-on called Arction you can buy; its graphs are very fast, but you have to handle them at a pretty low level.
01-08-2021 05:30 AM
Due to algorithm limitations, we cannot make good use of multiple cores.
I have tried to use Arction in LabVIEW (hosting the chart through a .NET container), but failed to get it working.
There was a problem where the VI could not be saved after modifying the program.
I have tried downsampling, but because the collected signals are unusual, their high-frequency characteristics cannot be displayed after downsampling.
There are many fast spikes in the signal, and downsampling may lose them.
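The spike-loss problem with naive stride decimation can be seen in a quick sketch (Python/NumPy is used here purely for illustration; the sample count and spike position are made up):

```python
import numpy as np

# A 500k-sample trace with a single narrow spike.
sig = np.zeros(500_000)
sig[123_456] = 5.0

# Naive decimation: keep every 250th sample (500k -> 2k points).
naive = sig[::250]
print(naive.max())  # 0.0 -- the spike falls between kept samples and vanishes
```

Any spike narrower than the decimation stride will usually disappear this way, which matches the behavior described above.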
01-08-2021 07:26 AM
You say "the efficiency has been optimized as much as possible."
There's a long history around here of posters *thinking* that to be true, only to find that some of the experts here are able to improve things by an order of magnitude (and sometimes *many* orders).
I don't know your background or your app, but I must note that you're trying to draw 500k pts each for 128 waveforms to a screen that's maybe only 2k pixels wide. That doesn't strike me as maximum optimization, giving me more reason to suspect that other parts of the code might benefit from some expert eyes.
Please post the processing code and some typical data, and let's see if anyone can speed things up enough that you don't need to worry about CPU upgrades.
-Kevin P
01-08-2021 07:43 AM - edited 01-08-2021 07:48 AM
If you need to preserve spikes in the data, you need to use a different downsampling approach than just picking every n-th element. I have done this in the past by selecting the min and max value of each individual interval rather than just the middle point of that interval. There are other possible approaches such as using the average AND the max of each interval. It all depends on the data and characteristics you want to make visible.
But displaying 500k samples on any screen will probably remain huge overkill for many decades. Going by the last 30 years, screen resolution has typically quadrupled per dimension; extrapolated another 30 years, we will probably have 10k pixels in the horizontal direction by then, or 20k if you can throw a lot of money at it.
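The min/max-per-interval decimation described above can be sketched as follows (Python/NumPy is used here only as pseudocode for the array operations; in LabVIEW the equivalent would be a reshape followed by Array Max & Min on each row):

```python
import numpy as np

def minmax_decimate(signal, n_bins):
    """Reduce `signal` to 2*n_bins points by keeping the min and max
    of each interval, so that narrow spikes survive the decimation."""
    n = (len(signal) // n_bins) * n_bins   # drop the ragged tail, if any
    bins = signal[:n].reshape(n_bins, -1)  # one row per interval
    lo = bins.min(axis=1)
    hi = bins.max(axis=1)
    # Interleave min/max so the plotted trace sweeps each interval's range.
    out = np.empty(2 * n_bins)
    out[0::2] = lo
    out[1::2] = hi
    return out

# Example: a 500k-sample trace with one narrow spike,
# decimated down to ~2k intervals (roughly one per screen pixel).
sig = np.zeros(500_000)
sig[123_456] = 5.0
dec = minmax_decimate(sig, 2_000)
print(len(dec), dec.max())  # 4000 5.0 -- the spike's peak is preserved
```

The averaged variant mentioned in the same reply is analogous: keep `bins.mean(axis=1)` alongside the max (or min/max) of each interval, depending on which features need to stay visible.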
01-08-2021 09:10 AM
I like Rolf's idea: if you downsample to a reasonable amount, say 6000 points, you can take the max, min, and avg of each group and plot them. Then you'd see both spikes and trends.
01-09-2021 12:03 PM
My favorite saying in engineering is "you don't know what you don't know."
It's very applicable here. If you're asking about processor specs with respect to optimization, you're implicitly saying "I don't know a lot about optimization."
When that's something you don't know, your best path is to ask for help with optimization rather than providing what you believe to be the best path forward. You're likely missing other things. If so, you're putting a restriction on your solution that prevents you from getting to a more ideal solution.
01-10-2021 09:52 PM
I agree with your opinion, but in my situation time is more precious. If a better computer can temporarily solve part of the problem, the whole project can progress faster.
01-10-2021 10:01 PM
Yes, we have also used this method of reducing the sample count.
You mentioned Arction before; do you have experience successfully applying this control in LabVIEW?
01-10-2021 10:03 PM
This comes down to whether time or money is more effective in the short term.
In the long term, the software itself will be optimized, but in the short term it seems better to replace the computer.