LabVIEW


How can I create a cluster on the fly?

Solved!

I'm inserting rows in a database table as part of a logging effort for our program.  The table in question has a lot of columns, many of which can be null at any particular time.  As such, I'd like to decide at run time (as situations warrant) which columns get a value when I add the new row.  The columns are of mixed data types... some strings, some floats, some ints, etc.

 

For example...

  • On one pass, I'd like to populate columns "temp", "volume", and "weight".
  • On another pass, I may want to populate columns "temp", "color", and "notes".

I generate the array of column names. That part's easy.

How (in the world) do I create the cluster that feeds the DATA input of the "DB Tools Insert Data" VI?

 

Sometimes I may need an int, a float, and a boolean.

Other times I may need two ints and a string.
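(To make the goal concrete in text form, here's a hypothetical Python sketch of what I'm after; the column names and values are just the examples above, and a dict stands in for the LabVIEW cluster.)

```python
# On each pass, a different set of column/value pairs of mixed types
# is assembled -- this is the "cluster on the fly" I want to build.
pass_one = {"temp": 72.4, "volume": 1.5, "weight": 10}    # float, float, int
pass_two = {"temp": 72.4, "color": "red", "notes": "ok"}  # float, string, string

def columns_and_values(row):
    """Split one pass's row dict into the parallel column-name and
    value lists that a dynamic INSERT would need."""
    cols = list(row.keys())
    vals = [row[c] for c in cols]
    return cols, vals

cols, vals = columns_and_values(pass_one)
```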

 

Thanks!

Message 1 of 9
Are you just dealing with one table? If so, you might want to always insert all the columns and leave the unused columns empty. The problem is that LabVIEW does not have a null. There is an empty string and NaN for numbers, but I doubt they would end up as nulls in the database. The best way would be to normalize your database so that you have a table for each type of insert. Of course, that is not always easy or possible for legacy reasons. I don't think you can create different types of clusters programmatically. Connecting one type of cluster to another results in a broken wire. The only thing I can think of is using polymorphic VIs.
=====================
LabVIEW 2012


Message 2 of 9

If possible, you should reconsider your database design. If you have lots of records with empty data, that is a sign the table design is not very good. Effective use of link tables helps keep the number of columns in a table to a minimum and allows for varying amounts and types of additional data.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 3 of 9

I may not have the flexibility to redesign the tables.  😞  (legacy issues)

But, I'd certainly appreciate learning more so I can present the case for a redesign.

 

I'm not a database guy.  Do you have any suggestions on how to get up to speed a bit on "link tables" or how best to approach the redesign?  

 

Presently, each table may have dozens of columns, each holding a particular datum about our machine. The first column of each table is a time stamp for when a reading is taken, which happens at about 4 Hz. Rather than re-record information that hasn't changed, the columns are populated selectively, only for those values that have changed. (Make sense?)

 

Message 4 of 9

Given this example, you could have a table that contains the timestamp and a record ID. Most databases can assign the key automatically and guarantee it to be unique. You may also store the machine name or other identifying information specific to this entry. However, all columns in this table would be populated in every record. You can then have another table that contains the specific data for that record, with a column holding the ID from the first table; you can have multiple entries in this table for that specific ID. Each entry would contain the ID from the first table, a column indicating what the specific data is (voltage, temperature, pressure, etc.), and then the data itself. If you have different data types, you can have a table for each specific data type (string, double, image, etc.). If you have separate tables for the different data types, you would need a key/ID for each data value record. The data tables would then simply be the ID for the data attribute and the value itself.

 

So, table one may consist of the following columns: Table1-Key, Timestamp, Test ID

 

The attributes table would have the following columns:  Table1-Key, Record Type, Data-Key

 

Your Data Tables would have the following columns: Data-Key, Value. (You could have multiple Data Tables, one for each data type.)

 

To get all the data for a specific time, you join the results of the queries from the tables for a specific Table1-Key and the corresponding Data-Keys from the data tables.
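As an illustration of that layout, here is a small self-contained sketch using Python's sqlite3 module as a stand-in for the real database. All table and column names are assumptions chosen to mirror the description above, not anything from the actual schema.

```python
import sqlite3

# Link-table layout: one fully-populated main table, an attributes
# table keyed on the main record, and a per-type data table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE readings  (reading_id INTEGER PRIMARY KEY, ts TEXT, test_id TEXT);
    CREATE TABLE attributes(reading_id INTEGER, rec_type TEXT, data_key INTEGER);
    CREATE TABLE data_float(data_key INTEGER PRIMARY KEY, value REAL);
""")

# Every column of the main table is populated in every record...
con.execute("INSERT INTO readings VALUES (1, '2013-01-01 12:00:00', 'RUN-7')")
# ...but only the values that actually changed this cycle are stored.
con.execute("INSERT INTO data_float VALUES (10, 72.4)")
con.execute("INSERT INTO attributes VALUES (1, 'temp', 10)")

# Join everything back together for one specific Table1-Key.
rows = con.execute("""
    SELECT r.ts, a.rec_type, d.value
    FROM readings r
    JOIN attributes a ON a.reading_id = r.reading_id
    JOIN data_float d ON d.data_key   = a.data_key
    WHERE r.reading_id = 1
""").fetchall()
```

Rows that were never recorded simply don't exist, so nothing in the database is wasted on empty columns.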

 

If your table currently has dozens of columns but you are only storing a few at a time, you are wasting a ton of space on all the empty columns. Some databases are better than others at reducing this waste, but link tables help you store only the specific data for a given record and not fill the database with lots of mostly empty records.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 5 of 9
Solution
Accepted by topic author Gazpacho99

Gazpacho99 wrote:

How (in the world) do I create the cluster that feeds the DATA input of the "DB Tools Insert Data" VI?


What I've done before is make a copy of "DB Tools Insert Data.vi" and modify it internally to accept an array of variants instead of a cluster.  Then I can programmatically build an array of whatever I want to insert, leaving unused fields as Null.  It was a few years ago when I did it, but I think it was quite easy to do.
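(For readers outside LabVIEW: the variant-array idea amounts to building the INSERT statement from whichever column/value pairs happen to be present on a given pass, so omitted columns are simply left NULL. A hedged Python/sqlite3 sketch, with illustrative table and column names:)

```python
import sqlite3

def insert_row(con, table, row):
    """Build a parameterized INSERT from only the columns present in
    `row` (a dict); any column not mentioned stays NULL."""
    cols = ", ".join(row)
    marks = ", ".join("?" for _ in row)
    con.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                list(row.values()))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (ts TEXT, temp REAL, volume REAL, color TEXT, notes TEXT)")

# Different passes supply different column sets and mixed types.
insert_row(con, "log", {"ts": "t0", "temp": 72.4, "volume": 1.5})
insert_row(con, "log", {"ts": "t1", "temp": 71.9, "color": "red"})

row = con.execute("SELECT volume, color FROM log WHERE ts = 't1'").fetchone()
```

Note the values go through `?` placeholders; only the column names (which you control) are spliced into the SQL text.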

 

-- James

Message 6 of 9
Good idea, James. Another thing to consider is the raw SQL execute query VI.
=====================
LabVIEW 2012


Message 7 of 9

James and Steve both offer good approaches.  However, this is one of those cases where I'd bring LVOOP into play.  Dynamic dispatch will allow you to select the child-class override for "write row" fairly painlessly and will allow easier expansion when you add new data definitions or change existing ones.


"Should be" isn't "Is" -Jay
Message 8 of 9

James - thanks!

 

I liked the idea, and you were right... this was a fairly straightforward approach.  

 

The other option of re-shaping the database to take better advantage of linked tables remains (for lack of a better way of saying it) on the table, but that will be tomorrow's battlefront.  Right now, I'm good to go.

 

 

Message 9 of 9