10-14-2011 06:52 PM
I'm inserting rows into a database table as part of a logging effort for our program. The table in question has a lot of columns, many of which can be null at any particular time. As such, I'd like to decide at run time (as situations warrant) which columns get a value when I add the new row. The columns are of mixed data types... some strings, some floats, some ints, etc.
For example...
I generate the array of column names; that part's easy.
How (in the world) do I create the cluster that feeds the DATA input of the "DB Tools Insert Data" VI?
Sometimes I may need an int, a float, and a boolean.
Other times I may need two ints and a string.
Thanks!
10-14-2011 07:07 PM
10-14-2011 07:46 PM
If possible, you should reconsider your database design. If you have lots of records with empty fields, this is a sign that the table design is not very good. Effective use of link tables helps keep the number of columns in a table to a minimum and allows for varying amounts of additional data and data types.
10-14-2011 09:21 PM
I may not have the flexibility to redesign the tables. 😞 (legacy issues)
But, I'd certainly appreciate learning more so I can present the case for a redesign.
I'm not a database guy. Do you have any suggestions on how to get up to speed a bit on "link tables" or how best to approach the redesign?
Presently, each table may have dozens of columns, each holding a particular datum about our machine. The first column of each table is a timestamp for when a reading is taken, which happens at about 4Hz. Rather than re-record information that hasn't changed, the columns are only populated selectively for those data that have changed. (Make sense?)
10-14-2011 10:29 PM
Given this example, you could have a table which contains the timestamp and a record ID. Most databases can assign the key automatically and guarantee it to be unique. You may also store the machine name or other identifying information specific to this entry. All columns in this table would be populated in every record.

You can then have another table which contains the specific data for that record. This table would have a column containing the ID from your first table, and you can have multiple entries in it for that specific ID: the ID from the first table, a column indicating what the specific datum is (voltage, temperature, pressure, etc.), and then the data itself. If you have different data types, you can have a table for each specific data type (string, double, image, etc.). With separate tables for the different data types you would need a key/ID for each data value record, and the data tables would simply be the ID for the data attribute and the value itself.
So, table one may consist of the following columns: Table1-Key, Timestamp, Test ID
The attributes table would have the following columns: Table1-Key, Record Type, Data-Key
Your Data Tables would have the following columns: Data-Key, Value. (You could have one Data Table for each data type.)
To get all the data for a specific time, you join the results of the queries from the tables for a specific Table1-Key and the corresponding Data-Keys from the data tables.
If your table currently has dozens of columns but you are only storing a few at a time, you are wasting a ton of space on all the empty columns. Some databases are better than others at reducing this waste, but link tables help you store only the specific data for a given record instead of filling the database with lots of largely empty records.
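To make the design above concrete, here is a minimal sketch in Python with SQLite standing in for the actual database (the real system presumably talks to its database from LabVIEW). All table and column names (`readings`, `values_real`, etc.) are illustrative, not from the original schema: one master table where every column is always populated, plus a per-type value table keyed by the master table's record ID.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Master table: every column is populated in every record.
cur.execute("""CREATE TABLE readings (
    reading_id INTEGER PRIMARY KEY,
    ts         TEXT NOT NULL,
    test_id    TEXT NOT NULL)""")

# Link table for one data type; a reading only gets rows here
# for the values that actually changed.
cur.execute("""CREATE TABLE values_real (
    reading_id INTEGER REFERENCES readings(reading_id),
    attribute  TEXT NOT NULL,
    value      REAL)""")

# Record one reading with just the two values that changed.
cur.execute("INSERT INTO readings (ts, test_id) VALUES (?, ?)",
            ("2011-10-14 21:21:00", "run-1"))
rid = cur.lastrowid
cur.executemany(
    "INSERT INTO values_real (reading_id, attribute, value) VALUES (?, ?, ?)",
    [(rid, "voltage", 3.3), (rid, "temperature", 71.5)])

# Join to recover everything stored for one timestamp.
rows = cur.execute("""
    SELECT r.ts, v.attribute, v.value
    FROM readings r
    JOIN values_real v ON v.reading_id = r.reading_id
    WHERE r.ts = ?""", ("2011-10-14 21:21:00",)).fetchall()
```

Note that no space is wasted on unchanged values: they simply have no row in the value table for that reading.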
10-15-2011 05:30 PM
Gazpacho99 wrote: "How (in the world) do I create the cluster that feeds the DATA input of the 'DB Tools Insert Data' VI?"
What I've done before is make a copy of "DB Tools Insert Data.vi" and modify it internally to accept an array of variants instead of a cluster. Then I can programmatically build an array of whatever I want to insert, leaving unused fields as Null. It was a few years ago when I did it, but I think it was quite easy to do.
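The variant-array trick can't be shown here in textual form since LabVIEW is graphical, but the same run-time idea, sketched in Python with SQLite, looks like this (the `log` table and its columns are hypothetical): build the column list and placeholders from whichever fields you actually have, and let every other column default to NULL.

```python
import sqlite3

def insert_row(conn, table, values):
    """Build the INSERT at run time from whichever columns are supplied;
    all other columns are left NULL. `values` maps column name -> value."""
    cols = ", ".join(values)
    marks = ", ".join("?" for _ in values)
    conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                 tuple(values.values()))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts TEXT, volts REAL, state INTEGER, note TEXT)")

# Sometimes a float and an int...
insert_row(conn, "log", {"ts": "12:00:01", "volts": 3.3, "state": 1})
# ...other times a string; decided entirely at run time.
insert_row(conn, "log", {"ts": "12:00:02", "note": "valve opened"})
```

In LabVIEW terms, the dict plays the role of the variant array paired with the column-name array: both carry mixed types and a varying number of entries into a single insert VI.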
-- James
10-15-2011 05:57 PM
10-16-2011 11:14 AM
James and Steve both offer good approaches. However, this is one of those cases where I'd bring LVOOP into play. Dynamic dispatch will allow you to select the child class override for "write row" fairly painlessly, and will allow easier expansion to add new data definitions or change existing ones.
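For readers unfamiliar with dynamic dispatch, here is the idea sketched in Python rather than LVOOP (which is graphical); the class names and the `log` table are illustrative only. Each record type overrides a "write row" method, and the caller never needs to know which concrete type it holds.

```python
import sqlite3

# Dynamic dispatch: each record type overrides write_row, the
# text-language analogue of a LabVIEW child class overriding a
# "write row" dynamic-dispatch VI.
class RowWriter:
    def write_row(self, conn):
        raise NotImplementedError

class VoltageRow(RowWriter):
    def __init__(self, ts, volts):
        self.ts, self.volts = ts, volts
    def write_row(self, conn):
        conn.execute("INSERT INTO log (ts, volts) VALUES (?, ?)",
                     (self.ts, self.volts))

class NoteRow(RowWriter):
    def __init__(self, ts, note):
        self.ts, self.note = ts, note
    def write_row(self, conn):
        conn.execute("INSERT INTO log (ts, note) VALUES (?, ?)",
                     (self.ts, self.note))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts TEXT, volts REAL, note TEXT)")

# The caller never inspects the type; dispatch picks the right override.
for row in (VoltageRow("12:00:01", 3.3), NoteRow("12:00:02", "valve opened")):
    row.write_row(conn)
```

Adding a new data definition then means adding one new child class, without touching the calling code.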
10-17-2011 05:01 PM
James - thanks!
I liked the idea, and you were right... this was a fairly straightforward approach.
The other option of re-shaping the database to take better advantage of linked tables remains (for lack of a better way of saying it) on the table, but that will be tomorrow's battlefront. Right now, I'm good to go.