LabVIEW


Insert 2D array into 3D array

Solved!
Solution
Accepted by topic author Daikataro

@Daikataro wrote:

-Use Replace Array Subset to replace the cluster containing the old information with the updated cluster containing the new data: it does nothing, and gives me a completely empty array.

 


That makes no sense. Can you show us that code too? There must be a mistake. For example, you need to initialize the shift register with an array of sufficient size, because you cannot "replace" an element that does not exist.
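LabVIEW diagrams cannot be pasted as text, but here is a minimal Python sketch of that rule, purely as an illustration: an in-place replace needs an existing slot, so the array feeding the shift register must be initialized to its full size first.

```python
buffer = []             # like an uninitialized shift register: an empty array
# buffer[0] = "new"     # would raise IndexError: there is no element to replace

buffer = [""] * 10      # initialize with an array of sufficient size first
buffer[3] = "updated"   # now the in-place replace works as expected
print(buffer)
```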

 

For the index-unbundle-append-bundle-replace step, you could use a stack of IPEs (In Place Element structures).

 

I think the array manipulations belong inside the "No Error" case. If an error occurs, you are currently processing an empty array.

 

Overall, I still think you are unnecessarily thrashing memory by constantly resizing inner elements of complicated structures. There has to be a better way.

Message 11 of 17

Right indeed, the array needed to be initialized with a value; once I did that, the Replace Array Subset function worked as it should. Now, if there were a way to avoid the array of blank strings on top, that would be awesome; if not, I will just fill it with headers or something. The code that exhibited that behavior is attached in the previous post. Thanks a lot, now moving on to the buffering part.

Message 12 of 17

Finally! Thanks for your valuable help, I managed to get the buffer going. What was needed was the initial value so every column could be written, so I just filled it with headers and now it works fine; I can get the data in there without issues. Yeah, it was all meant to be in the error case, but this was just a "quick and dirty" test, so of course it was fragmented. The buffer function now works like this:

 

-If the cycle number is the same as in the last iteration, the code continues appending data to the current values.

-When the cycle number changes, it checks the array size: if it is 1 (no buffer exists yet), it inserts the first value into the nearest available slot, i.e. position 1; if it is not 1, it simply replaces whatever index 1 holds with the most recent data. In both cases it writes only the headers to page 0 so that page can be freshly written with new data. Here's the updated VI; if you have further advice to increase performance or improve the coding, that would be great.
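For readers following along, here is a rough Python sketch of that buffer logic (all names are hypothetical; the actual implementation is the attached LabVIEW VI):

```python
def update_buffer(pages, cycle, last_cycle, new_row, headers):
    """pages[0] holds the current cycle; pages[1] holds the previous one."""
    if cycle != last_cycle:
        # Cycle changed: move the finished page into slot 1...
        if len(pages) == 1:            # array size 1: no buffer page exists yet
            pages.insert(1, pages[0])  # insert into the nearest available slot
        else:
            pages[1] = pages[0]        # replace whatever index 1 holds
        # ...and restart page 0 with only the headers.
        pages[0] = [headers]
    # In either case, the new row goes onto the current page.
    pages[0].append(new_row)
    return pages
```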

Message 13 of 17

You don't need to clear the error if the loop stops on error and nothing taps into the error wire.

You should keep the cluster array at a fixed size with two elements.

 

Here's what I came up with. See if it gives you some ideas.

Message 14 of 17

So, just to be thorough, and because someone will probably benefit from the solution: an array is not the optimal solution when you are handling large volumes of data. I did some field testing yesterday and, as the array fills with data, consumer-loop performance starts to suffer, taking up to 200 milliseconds to finish. Then, on cycle change, when it can offload the data to the past cycle, it suddenly speeds up and very quickly consumes through the queued actions, until it slows down again as the buffer starts filling up. And that was with very fast settings on the system; I can only imagine the disaster it could cause with a 30-second cycle time. I have since switched to a four-TDMS-file approach (from just two), where one file is buffer0 and another is buffer1: fill buffer0 with data and, on cycle change, recall the data from buffer0, clear the contents (if any) of buffer0 and buffer1, fill buffer1 with that data instead, then input the most recent data into buffer0. Execution time went from 200 ms to 10 ms per cycle on simulated hardware; I'll do some more field testing today and see how it holds up.
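Purely as an illustration of that rotation (plain text files stand in for the TDMS references, and all names are hypothetical):

```python
def on_cycle_change(base):
    b0 = base + "_buffer0.log"         # current cycle
    b1 = base + "_buffer1.log"         # past cycle
    with open(b0) as f:                # recall the finished cycle's data
        finished = f.read()
    open(b0, "w").close()              # clear buffer0 for the new cycle
    with open(b1, "w") as f:           # clear buffer1 and refill it with
        f.write(finished)              # the data recalled from buffer0
```

New samples for the current cycle then go to buffer0 again, so each file stays small and appends stay cheap.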

Message 15 of 17

Yes, I told you that the earlier "solutions" are very hard on the memory manager, because you constantly need to re-allocate data in memory, one of the most expensive operations in any programming language. Arrays are contiguous in memory, so when the local space runs out, a new (larger) space needs to be allocated and everything moved over. Ad infinitum. Eventually you'll run out of sufficiently sized contiguous memory and the program can no longer continue. You also have the additional problem that you display these gigantic data structures on the front panel, which forces yet more copies through the UI thread.

 

Another complication is your use of string arrays. The data (including the timestamp) could be stored in a plain 2D DBL array, for example. Since each field would then be exactly 8 bytes, you could even stream to a flat binary file where the file offset of each row or element can be calculated from first principles.
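As a non-LabVIEW illustration, here is what "calculated from first principles" looks like in a Python sketch, assuming a hypothetical column count:

```python
import struct

N_COLS = 4                              # assumed column count, incl. timestamp
ROW = struct.Struct("<%dd" % N_COLS)    # one row = N_COLS little-endian doubles

def write_row(f, row, values):
    f.seek(row * ROW.size)              # offset is pure arithmetic
    f.write(ROW.pack(*values))

def read_element(f, row, col):
    f.seek(row * ROW.size + col * 8)    # every field is exactly 8 bytes
    return struct.unpack("<d", f.read(8))[0]

with open("log.bin", "w+b") as f:
    write_row(f, 0, [1.0, 2.0, 3.0, 4.0])
    write_row(f, 1, [5.0, 6.0, 7.0, 8.0])
    print(read_element(f, 1, 2))        # -> 7.0
```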

 

If you want to stay in memory, you should decide on an upper size limit for the data structures, allocate once (e.g. with all NaN), then replace with real data as it arrives. Your outer array is fixed size and has only two elements, so it might be worthwhile to simply use two shift registers, each holding a simple 2D DBL array. There is not even a need to swap things on backup; just keep a record of which array is currently active. For the front panel indicator, all you need to format to strings and display is the currently visible tiny subset (a dozen rows out of many thousands!). You could design your own scrollbar that just selects a different subset for the indicators based on the scroll position. Why pump millions of bytes through the UI thread if all you ever see is a few handfuls of rows?
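A minimal Python/NumPy sketch of the allocate-once, show-only-the-visible-window idea (sizes and names are assumptions, not the poster's code):

```python
import numpy as np

MAX_ROWS, N_COLS = 100_000, 4
data = np.full((MAX_ROWS, N_COLS), np.nan)   # allocate once, all NaN
n_rows = 0

def add_row(row):
    global n_rows
    data[n_rows, :] = row                    # in-place replace, no reallocation
    n_rows += 1

def visible_strings(scroll_pos, window=12):
    # Format only the dozen rows the indicator actually shows.
    return [["%g" % v for v in r] for r in data[scroll_pos:scroll_pos + window]]
```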

 

With a properly designed program, I am sure there is at least another order of magnitude improvement possible, probably much more. 😄

Message 16 of 17

Actually, in the main program the buffer will never be visible to the user; I only displayed it in the "debugging" subcode for exactly that purpose. In the main code the user gets exactly one new row of data every 15 seconds or so. The data will need to stay a 2D array of strings because, along with the timestamp, I need to include miscellaneous data about the test, such as user configurations, which are string data. The TDMS files are working well so far: what I do is store the current cycle's data as "filename_buffer0" and, on cycle change, extract the data from "buffer0", replace "buffer0" and "buffer1" with blank TDMS files, then dump the extracted data into buffer1 and record the current data into buffer0.

 

I'm having positive results with TDMS, and now the code only starts slowing down when I have a very large (50+ MB) TDMS file I need to append data to, which in turn slows the queue. Empirically, I noticed a 100 ms period between sample "batches" seems to work fine up to 100 MB files, so I was thinking of adding some code that checks the log file size on every queue event and, if it is larger than, say, 50 MB (so I can have 50 ms batches), creates a new blank TDMS file named filename_x, where x is a consecutive number, and starts recording to that instead. In the end I would have a bunch of roughly 50 MB files that, together, contain the entirety of the sampled data, without being too hard on processing. Is there a better way to speed up the code?
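A sketch of that size check in Python (the 50 MB threshold comes from the description above; file names are hypothetical and a plain file stands in for the TDMS):

```python
import os

MAX_BYTES = 50 * 1024 * 1024              # roll over at roughly 50 MB

def maybe_roll_over(base, index):
    """Call on every queue event; returns the index of the file to record into."""
    path = "%s_%d.log" % (base, index)
    if os.path.exists(path) and os.path.getsize(path) >= MAX_BYTES:
        index += 1                                       # start filename_{x+1}
        open("%s_%d.log" % (base, index), "w").close()   # new blank file
    return index
```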

Message 17 of 17