03-20-2017 12:29 PM
I have a machining task where the designs arrive as 2D arrays, but I don't want to waste machine time machining elements of zero depth. Stripping the zeros is slow: the operator may have to wait up to 30 minutes for LabVIEW to finish. Can someone please show me how to use all 4 cores of the processor to speed up the process? VIs attached. I've included an example plot that shows the desired input and output of the VI.
Solved! Go to Solution.
03-20-2017 12:44 PM
Start by making "IndexOfZeros.vi" re-entrant and try it again.
Ben
03-20-2017 01:48 PM
- Not having terminals on the root of the diagram can cause inefficiencies.
- Having the VI not be reentrant can be limiting.
- Disabling automatic error handling and debugging can help performance.
I also made an improvement: if a line is all zeros, there is no need to perform the relatively slow search on the array. Attached is an example that speed-tests the original method against mine, which is about 5-8 times faster with these changes. What is considered slow? Even the original method usually takes less than 1 ms on my machine.
Unofficial Forum Rules and Guidelines
Get going with G! - LabVIEW Wiki.
16 Part Blog on Automotive CAN bus. - Hooovahh - LabVIEW Overlord
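Since the attached VIs are LabVIEW G diagrams that can't be pasted as text, here is a hedged Python sketch of the all-zeros early exit described above; the function name is mine, not from the attached example.

```python
# Hedged sketch of the early-exit idea: skip the per-element search
# entirely when a row contains no nonzero depth.

def strip_leading_zeros(row):
    """Move the leading zeros of one row to the end, keeping the row length."""
    if not any(row):  # all-zero row: nothing to search for or rotate
        return row
    first = next(i for i, v in enumerate(row) if v != 0)
    return row[first:] + [0] * first  # depths now start at index 0
```

The early exit matters because `any()` stops at the first nonzero value, while the search-and-rotate path walks the row again and allocates a new list.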
03-20-2017 02:07 PM - edited 03-20-2017 02:17 PM
03-21-2017 11:35 AM
Thanks for the replies and speed pointers. I used Hooovahh's code with a large circular array to benchmark zero-stripping of a large file. I had incorrectly thought the zero stripping was the time-consuming VI; it was actually the file write and the indicators on the front panel.
I replaced my file-write scheme, using the basic blocks of WriteToSpreadsheetFile.vi as an example.
In the end I read a 221,893 KB 2D array from file, stripped it, and saved it back to file in 55 seconds, which would be fine for even the largest files.
03-21-2017 12:24 PM - edited 03-21-2017 12:45 PM
@bmann2000 wrote:
Thanks for the replies and speed pointers. I used Hooovahh's code with a large circular array to benchmark zero-stripping of a large file. I had incorrectly thought the zero stripping was the time-consuming VI; it was actually the file write and the indicators on the front panel.
I replaced my file-write scheme, using the basic blocks of WriteToSpreadsheetFile.vi as an example.
03-21-2017 12:42 PM - edited 03-21-2017 01:05 PM
@altenbach wrote:
- All you need to do for each row is find the first nonzero element and rotate the array accordingly. This can be done in place.
For example, the following code gives the same result (on the sample array) and uses significantly less code (where bugs can hide!) than Brian's. (I am sure it can be improved further!) Since the first nonzero element occurs relatively early, it is not necessary to allocate that large boolean array and do many more comparisons with zero. (Yes, the result will be slightly different if all rows are shorter.)
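The find-first-nonzero-and-rotate idea can be sketched in Python/NumPy as follows (the original is a LabVIEW diagram; `np.flatnonzero` and `np.roll` stand in for the search and rotate, and the in-place aspect is only approximated here):

```python
import numpy as np

def rotate_rows(a):
    """For each row, rotate so the first nonzero element lands at index 0."""
    for i, row in enumerate(a):
        nz = np.flatnonzero(row)
        if nz.size:                      # skip all-zero rows entirely
            a[i] = np.roll(row, -nz[0])  # leading zeros move to the end
    return a
```

Because the row is rotated rather than resized, the 2D array keeps its shape and no new array of the same size needs to be allocated per row.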
03-22-2017 12:33 PM
The attached VI shows the problem more clearly, and I think we've solved it. A 20 MB version of the file is attached due to the upload limit. The last time I stripped zeros in a project, it was minutes' worth of 2 GHz multi-channel sampling, so this time I incorrectly thought the zero strip was the issue. The attached code demonstrates that writing the file as text versus binary is what slows the code down, in this section anyway; I have lots of other 2D array processing going on elsewhere in the application that may become the topic of another post.
Altenbach has touched on the bigger question of whether I should be doing this in the first place. I'm currently rewriting an application that hangs with an out-of-memory error when processing a batch of large files typical of those in the example code. A large input file is typically 200 MB of text, which gets converted to DBL when read. Each line of the 2D array represents a line to be machined, but it must be combined with transformation matrices, motion parameters, calibrations, timing triggers, etc. to produce machining data that can be fed to the hardware that performs the machining. In the old application, converting the 200 MB of depth information to machining data copies the array in memory many times, leading to the out-of-memory error. It also wastes time by always machining a rectangle, even if the part is circular.
To get around these two limitations, I'm rewriting the application. Instead of manipulating a huge 2D array, I plan to process a line at a time, buffering each line from file during the machining process. As each line is a different length, I figure that processing a line at a time from file is preferable to using a cluster of arrays.
I also figure that the resulting text sequencer file will be easy to debug prior to run time; once the application is fully debugged, I can switch the file type to binary to speed up the pre-processing step.
03-22-2017 03:24 PM
03-23-2017 04:15 AM
Okay, thanks. So when I switch over to binary after debugging, I should zero-pad to make each line the same length, e.g. Integer_LineLength,depth0,depth1...depthN,pad0,pad0,pad0.
The design should stick with 2D arrays but ignore the zeros at the end. It still makes sense to move the zeros from the start of the line to the end; otherwise things get complicated, as I'm calculating the start-of-line coordinates, run-in distance, acceleration, etc. based on the coordinates of the first non-zero depth in the array. I don't want to deal with the XY coordinates of the zero-depth locations.
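That padded record layout could be sketched in Python with the `struct` module (the int32 length prefix, float64 depths, and fixed record width are my assumptions about the format described above):

```python
import struct

def pack_line(depths, width):
    """Pack one line as: int32 count, then `width` float64 depths (zero-padded)."""
    padded = list(depths) + [0.0] * (width - len(depths))
    return struct.pack(f"<i{width}d", len(depths), *padded)

def unpack_line(record, width):
    """Recover (count, depths) from one fixed-size record."""
    vals = struct.unpack(f"<i{width}d", record)
    return vals[0], list(vals[1:1 + vals[0]])  # drop the trailing padding
```

Because every record is the same size (4 + 8 * width bytes here), line N can be read by seeking directly to its offset, which suits the line-at-a-time buffering plan.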