LabVIEW


How should I make it memory efficient?

Hello All
I have the following problem:
The structure is like this:
Array1 (Cluster1)
Cluster1: numeric1, Array2, Array3
Array2 (Cluster2)
Cluster2: numeric2, Array3 (two vectors)
Array3 (Cluster3)
Cluster3: (two vectors)

As you can see, it is messy, and the memory utilisation and efficiency are very poor. It is enough to add a few elements to one of the vectors in Array3 and the whole structure must be copied to another place in memory.

I was thinking of having something like:

Array1 (references to Cluster1)
Cluster1: numeric1, Array2 (references to Cluster2), Array3 (references to Cluster3)

Then adding a new element would only mean adding another reference to the given array.
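To make the two layouts concrete, here is a rough sketch in Python notation (only an illustration of the shape of the data, not LabVIEW code; note that Python lists always hold references, so this shows the intent rather than LabVIEW's by-value copying):

from dataclasses import dataclass, field
from typing import Dict, List

# Current layout: everything nested by value inside one structure.
@dataclass
class Cluster3:
    vector_a: List[float] = field(default_factory=list)  # "two vectors"
    vector_b: List[float] = field(default_factory=list)

@dataclass
class Cluster2:
    numeric2: float = 0.0
    array3: List[Cluster3] = field(default_factory=list)

@dataclass
class Cluster1:
    numeric1: float = 0.0
    array2: List[Cluster2] = field(default_factory=list)
    array3: List[Cluster3] = field(default_factory=list)

array1: List[Cluster1] = []   # Array1 (Cluster1)

# Proposed layout: Array1 holds only references (here, integer keys into a
# registry), so growing one inner array would not touch any other element.
cluster1_registry: Dict[int, Cluster1] = {}
array1_refs: List[int] = []   # Array1 (references to Cluster1)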

I am not sure if I have made it clear; if not, where can I read tips about managing complicated structures such as arrays of clusters of arrays, etc.?

Thanks in advance
Pawel
Message 1 of 9
As you indicated in your post, the really big problem with very complex structures can be the performance of adding and deleting data.

First, if you could give a little more information on the background of this structure and how it all ended up in the same cluster that would be helpful.

Coming up with the right data structures for an application can be time-consuming, but get the structure right and the code will almost write itself.

Second, how much data will this structure be holding (number of elements in each array)?

Third, what do you mean by "references to cluster"?

Mike...

Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion

"... after all, He's not a tame lion..."

Message 2 of 9
> As you can see, it is messy, and the memory utilisation and
> efficiency are very poor. It is enough to add a few elements to one of
> the vectors in Array3 and the whole structure must be copied to
> another place in memory.
>

You state that all of the structure must be copied when the vector is
grown. That isn't necessarily the case. An array is implemented by
reference already, so adding an element to an array will indeed need to
resize the array you are adding to, but the internal arrays will just
copy their references and not their contents.
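As a loose analogy only (Python lists hold handles to their elements, a bit like the inner arrays inside a LabVIEW cluster are stored by handle), growing the outer container does not copy or move the inner data:

inner_a = list(range(1000))
inner_b = list(range(1000))
outer = [inner_a, inner_b]

ids_before = [id(x) for x in outer]
outer.append(list(range(1000)))          # grow the outer container
ids_after = [id(x) for x in outer[:2]]

# Only the handles were copied into the (possibly reallocated) outer
# container; the original inner arrays stayed where they were.
assert ids_before == ids_after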

The real thing to avoid is making copies of the top level array, and
making too many accesses to the inner elements. If accessed primarily
by nested loops, this should work quite nicely.

One idea that you may be contemplating is to use control references as
if they are data pointers. The problem is that the control reference is a UI
element. It is not an efficient way to reference data, and you should
instead either build the type you are using now, or move to something
where you use an array of integers to "point" to elements stored in
another array. This is useful when you want to do things like reorder
an array without touching all of the elements.
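For example, a tiny sketch of the reordering idea (Python used only to show the indexing; the large elements themselves never move):

data = ["gamma", "alpha", "beta"]      # imagine these are large clusters
order = sorted(range(len(data)), key=lambda i: data[i])   # -> [1, 2, 0]

# "Reordering" means rearranging small integers, not the elements.
for i in order:
    print(data[i])                     # alpha, beta, gamma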

Greg McKaskle
Message 3 of 9
By a reference to a cluster or to any other object I mean an unsigned integer pointing to the place in memory where the data are stored. I was thinking that then, when one of the elements is changed (the size of the matrix), all the others are unchanged, because they are separate objects, and the reference still remains an unsigned integer.

Now I'd like to explain the background of this, as you suggested; maybe you will find a better object to store the data.

The problem is to take measurements of a multichannel device, where every channel must undergo three specific measurements.

So we have:

1) Impedance
2) OCV_MAIN
3) Direction OCV

Re 1) Impedance is an array containing the frequency vector1 (about 100 points, DBL) and the resulting impedance vector (100 points, CPLX).

Re 2) OCV_MAIN is an array containing the frequency vector2 (about 200 points, DBL) and the resulting vector (200 points, DBL).

Re 3) Direction OCV is an array of clusters, each containing: Angle (DBL) and OCV_MAIN (10-30 points).

I could get rid of 2) by making it, e.g., position 1) in 3), but usually the vector in 2) is much bigger than the one in 3), so I do not want to make the whole matrix huge because of that; also, the physical meaning of the measurement is different.
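Roughly, in Python notation (used here only as shorthand for the cluster layout; the field names are just descriptive):

from dataclasses import dataclass, field
from typing import List

@dataclass
class Impedance:                         # 1)
    frequency: List[float] = field(default_factory=list)    # ~100 points, DBL
    impedance: List[complex] = field(default_factory=list)  # ~100 points, CPLX

@dataclass
class OCVMain:                           # 2)
    frequency: List[float] = field(default_factory=list)    # ~200 points, DBL
    value: List[float] = field(default_factory=list)        # ~200 points, DBL

@dataclass
class DirectionOCVPoint:                 # one element of 3)
    angle: float = 0.0
    ocv: OCVMain = field(default_factory=OCVMain)

@dataclass
class Channel:
    impedance: Impedance = field(default_factory=Impedance)
    ocv_main: OCVMain = field(default_factory=OCVMain)
    direction_ocv: List[DirectionOCVPoint] = field(default_factory=list)  # 10-30 points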

I enclose the control I am talking about; please feel free to make any suggestions.

kind regards
Pawel
Message 4 of 9
Hi Greg

I am not sure if I understand your last comment correctly. You say:

"... move to something
where you use an array of integers to "point" to elements stored in
another array. "

As I see it, that would not solve my problems with memory allocation when the array size changes. The system will still need to enlarge every single element of the array I am changing, for every element in the top-level array, I guess.

Are there any suggestions on ni.com about how to manage complicated controls?

Thanks a lot
Pawel
Message 5 of 9
> "... move to something
> where you use an array of integers to "point" to elements stored in
> another array. "
>

Sorry I didn't go into more detail, but I wasn't sure this would help.
The technique I mentioned has to do with building parallel arrays to
hold different elements of a struct, or building a parallel array that
gives you quicker searches of a complex datatype. For example, if you
have an array of personnel info with names, ages, social security
numbers, etc., you sometimes want to find someone by name, sometimes by
social security number, and perhaps the array is originally given to you
in age-sorted order.

This would mean that every time you try to find someone by name, you
can't use a binary search or even stop when you pass the name and know
they aren't in the array. You have to look at every element. The
technique would be to build two parallel lookup arrays. The string
sorted array would simply be an array of the integers 0 through N-1,
ordered so that the first element contains the index of the first
element by string. The second element contains the index of the second,
and so on. So now you have essentially pointers from the index array to
the storage array. You build another to keep order by social security
number, and as many others as you like. These of course have to be
updated when a new record is added, so there is a cost, but for
database-style access to a largely static data set, this can help a lot
with efficiency.
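In rough Python terms (just to sketch the idea; the record fields are invented for the example):

from bisect import bisect_left

# Main storage, in whatever order it arrived (say, sorted by age).
people = [
    {"name": "Baker",  "ssn": "222-33-4444", "age": 25},
    {"name": "Adams",  "ssn": "555-66-7777", "age": 31},
    {"name": "Carter", "ssn": "111-22-3333", "age": 47},
]

# Parallel lookup array: indices into `people`, ordered by name.
by_name = sorted(range(len(people)), key=lambda i: people[i]["name"])
name_keys = [people[i]["name"] for i in by_name]   # kept in step with by_name

def find_by_name(name):
    # Binary search the small index array; the records themselves never move.
    pos = bisect_left(name_keys, name)
    if pos < len(name_keys) and name_keys[pos] == name:
        return people[by_name[pos]]
    return None

print(find_by_name("Carter"))

A second index ordered by social security number would be built the same way against the same storage array.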

The other point where this helps is when there is lots of redundancy in
the info stored in the main array. As an example, let's say you are
storing Cartesian points to describe 3D shapes. One way would simply be
to have arrays of faces, each face having an array of 3D vertices. This
works, but you aren't taking advantage of the fact that each vertex is
typically shared by three or more faces. In our structure it is
duplicated again and again, which actually makes it difficult to reshape
the 3D shape.

A better structure, using the index technique, is to build an array of
faces, where each face has an array of vertex indices. The shape also has an
array of vertices, in no particular order, but the face vertices can now
"point" to any vertex and the faces now share it.

To test it, put a cube in the structures. The cube has six faces and
only eight points. In the first structure, there are six face arrays,
each with four 3D points for a total of 24 3D coords. The second has
six faces, each with four integer indices, and the shape has a second
array of eight coordinates. It doesn't duplicate information, which makes
it easier to update; it uses less storage; and it is quicker to access for
many operations.
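Sketched in Python (the counts match the cube example: eight shared vertices, six faces of four indices each):

# Indexed-face structure for a unit cube: 8 shared vertices, 6 faces.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom four corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top four corners
]

faces = [                 # each face is four indices into `vertices`
    [0, 1, 2, 3],         # bottom
    [4, 5, 6, 7],         # top
    [0, 1, 5, 4],         # front
    [2, 3, 7, 6],         # back
    [1, 2, 6, 5],         # right
    [0, 3, 7, 4],         # left
]

# Moving one corner updates a single entry; every face that shares it
# "sees" the change because faces store indices, not coordinates.
vertices[6] = (1.2, 1.2, 1.2)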


> As I see it, that would not solve my problems with memory allocation
> when the array size changes. The system will still need to enlarge
> every single element of the array I am changing, for every element in
> the top-level array, I guess.
>

When an array grows, that array may need to be moved in memory, since
arrays are stored in contiguous memory. If it is an array of one
million I8s, and you add one more byte, the 1M elements may need to move
so that the additional element can be stored with them. By contrast, an
array of 1,000 arrays of 1,000 I8s stores the same info. When adding a new
element to one of those 1,000-element arrays, the other 999,000 elements
do not need to move. Adding a new array to the 1000 already there means
that the array of 1000 array references may move, but none of the I8
elements will be moved.
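A rough sketch of the same trade-off (Python lists standing in for LabVIEW arrays; the exact reallocation behaviour differs, but the shape of the argument is the same):

CHUNK = 1000

# One flat array of a million elements: growing it may force the whole
# contiguous block to be reallocated and copied.
flat = [0] * 1_000_000
flat.append(1)

# The same data as 1,000 chunks of 1,000 elements: growing it touches only
# the last chunk (and maybe the small array of chunk handles), while the
# other chunks stay where they are.
chunks = [[0] * CHUNK for _ in range(1000)]
if len(chunks[-1]) >= CHUNK:
    chunks.append([])
chunks[-1].append(1)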

If you want more help, please describe more about what is in the array,
to see if there is redundant storage that the first technique may help
with, and describe how you will access it. Will you do more inserts,
more deletes, or more accesses? These are the things that affect the
efficiency of various data structures.

Greg McKaskle
Message 6 of 9
Hi Greg
Thanks a lot for your answer; I know what you mean now. I still think, however, that my structure is too complicated to be handled as one array. Instead, I have done what I was thinking of at the beginning: I have made an array of references by using GOOP (object-oriented programming). I am just about to see whether it works, but I have worked with GOOP before, so I think it is a good solution. I will not gain much speed, but speed is not a concern in my project.
This technique makes the structure clearer, because every object is just another instance of the class, and I keep the reference to every object in a vector. I can then have different sizes for every object without affecting the other objects. That should use memory most efficiently and make the code look professional and easy to maintain.

thanks again for your reply

kind regards
Pawel
Message 7 of 9

Greg McKaskle2 wrote:

When an array grows, that array may need to be moved in memory, since
arrays are stored in contiguous memory. 

Is that due to some behind-the-scenes LabVIEW coding or is that just the default structure for a LabVIEW array?
Is it possible to create some type of non-contiguous data structure?

This would be especially beneficial for inserts/deletes/removes on large data arrays.

Cory K
Message 8 of 9

Cory K wrote:

Is that due to some behind-the-scenes LabVIEW coding or is that just the default structure for a LabVIEW array?
Is it possible to create some type of non-contiguous data structure?

This would be especially beneficial for inserts/deletes/removes on large data arrays.


It's the way arrays are defined (at least arrays with constant-sized elements; I'm assuming that for arrays with variable-length elements, only the pointers form the array itself), and it's very useful for manipulating arrays. If it weren't that way, you would need to jump all over the place to handle the array elements, and performance would suffer.

 

You can split a large array into smaller pieces if you need to handle a large data structure. Then working on the array would first require getting the correct chunk (which can be stored in a single-element queue or a data value reference) and then working on that.

 

If you have a large array on which you do inserts and deletes, you may wish to consider another data structure, such as a linked list or a hash table.
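For instance, a minimal singly linked list sketch (plain Python, purely illustrative): inserting after a known node only relinks two references and never moves the other elements.

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def insert_after(node, value):
    # O(1): only the neighbouring link changes; no element is moved.
    node.next = Node(value, node.next)

# Build a tiny list 1 -> 3, then insert 2 in the middle.
head = Node(1, Node(3))
insert_after(head, 2)

node = head
while node is not None:
    print(node.value)     # prints 1, 2, 3
    node = node.next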


___________________
Try to take over the world!
Message 9 of 9