08-08-2011 06:53 AM
Hello,
I am using LabVIEW 8.2 and I am having difficulty connecting clusters. I have a sub-VI that analyzes data and produces a 24-element cluster as an output (indicator), which I want to insert into a larger cluster in the main VI. The main VI uses a large cluster, so this is a cluster within a larger cluster. I cannot get all of the elements to transfer correctly. Here is what I have done so far to get the data to transfer correctly.
Connecting "cluster to cluster" does not work. The only way I have found to transfer the data correctly from the sub-VI to the main VI is to use "Unbundle by Name" and "Bundle by Name" on each of the 24 elements.
Any ideas why "cluster to cluster" does not work?
08-08-2011 07:15 AM
Hi Kaspar,
I think the element order or the element count differs between your two clusters; only in that case does it give an error.
You can build one cluster from the other by using the Bundle function.
For more clarification, please attach your VI.
Thanks and Regards
Himanshu Goyal
08-08-2011 07:23 AM - edited 08-08-2011 07:24 AM
The names of the items are irrelevant, but the datatypes and their order are important. Tip #1 is always to create a type def and use it in all instances (it's fine to use type defs within type defs and clusters and so on); you'll avoid this problem and get automatic updates of your whole program should you need to change the cluster.
That said, you'll need to right-click the cluster border, select "Reorder controls", and make sure the element numbers are the same in both clusters.
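Since LabVIEW matches cluster elements by position and type rather than by name, a rough text-language analogy may help (a Python sketch with hypothetical field names, not actual LabVIEW code):

```python
from dataclasses import dataclass, astuple

# Hypothetical clusters: same field names, but a different element order.
@dataclass
class SubVICluster:       # cluster produced by the sub-VI
    voltage: float
    sample_count: int

@dataclass
class MainVICluster:      # cluster expected by the main VI
    sample_count: int
    voltage: float

# Wiring "cluster to cluster" is positional, like unpacking a tuple:
values = astuple(SubVICluster(voltage=3.3, sample_count=100))
mismatched = MainVICluster(*values)   # fields land in the wrong slots
print(mismatched)  # MainVICluster(sample_count=3.3, voltage=100)
```

This is why Unbundle/Bundle by Name works (it matches on names) while a direct wire fails when the order differs: the direct wire only cares about slot 0, slot 1, and so on.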
/Y
08-08-2011 07:23 AM
Are you using a typedef of the cluster you are passing to and from the sub VI?
08-08-2011 09:06 AM
In addition to using typedefs, I think it is also important to ask yourself whether all of this data SHOULD be in a cluster together. If you are using a very large cluster to contain all of your application's data, you are probably putting too many unrelated things together. The more tightly coupled your system is, the more difficult it becomes to maintain and reuse code. In addition, you can run into both performance and memory issues if your data set is very large. Try to design your clusters to contain only the data necessary for a given task/module; they should not be viewed as large buckets that simply carry stuff around.
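To make the grouping idea concrete, here is a minimal sketch (Python as a stand-in for cluster/typedef design; all names are hypothetical):

```python
from dataclasses import dataclass

# Instead of one giant "everything" cluster carrying unrelated data...
@dataclass
class EverythingCluster:
    daq_rate: float
    daq_channels: int
    log_path: str
    ui_title: str
    # ...dozens more unrelated fields

# ...group only what each task/module actually needs:
@dataclass
class AcquisitionConfig:
    daq_rate: float
    daq_channels: int

@dataclass
class LoggingConfig:
    log_path: str

def acquire(cfg: AcquisitionConfig) -> list:
    # This module depends only on the acquisition fields,
    # so changes to logging or UI data cannot break it.
    return [0.0] * cfg.daq_channels

print(acquire(AcquisitionConfig(daq_rate=1000.0, daq_channels=4)))
```

The same split applies to LabVIEW typedefs: several small, task-specific cluster typedefs keep modules decoupled and avoid copying a huge cluster through every wire.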