08-18-2021 04:51 AM
Hi everyone, I am working on my project with the SQLite tools from Dr. Powell. I wonder if it is possible to increase the reading speed of the database.
From his template (as far as I understand it), the read function only works per row. This means that if I have 1 million rows to read, the while loop has to run 1 million times.
Is it possible to read 10, 100 or 1000 rows in one loop iteration?
regards,
yan
08-21-2021 02:38 PM
Hi!
If you look into the documentation of the SQLite C API (e.g. https://www.sqlite.org/cintro.html), you will notice that at its core, the API only supports retrieving one row after the other via the sqlite3_step() function.
There is also sqlite3_exec(), which can return multiple rows, but the documentation says that it just calls sqlite3_step() multiple times internally.
Since that library is what every wrapper ultimately calls into, I don't think there is a way around the row-by-row access.
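For illustration, here is a minimal sketch of that access pattern using Python's built-in sqlite3 module, which wraps the same C library the LabVIEW toolkit uses (the table and column names here are invented for the example):

```python
import sqlite3

# In-memory stand-in for the real database file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts INTEGER, value REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(5)])

# Under the hood this is sqlite3_prepare_v2() plus repeated
# sqlite3_step(): each pass of the loop fetches exactly one row.
cur = conn.execute("SELECT ts, value FROM samples")
rows_read = 0
for row in cur:          # one step() per iteration
    rows_read += 1

print(rows_read)  # 5
```

However many rows the query matches, the underlying engine still steps through them one at a time; any "read N rows" convenience has to be built on top of this loop.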
08-23-2021 02:26 AM
Are you having performance problems, or just anticipating them?
The library works fairly quickly with prepared statements (although, of course, quickly is a relative term and quick for me may be too slow for you).
If you're already seeing a speed that is too slow for you, sharing your code (or some example with your query etc) and your database schema might lead to suggestions about possible improvements (either in SQL or LabVIEW).
Otherwise, I guess try it and see whether it's too slow. Loops can run pretty quickly sometimes 🙂
08-23-2021 04:18 AM
Hi, this is my "read" subVI (sqlite_read_table_no_verify.vi). Basically it is quite similar to the basic example.
I later call this "read" from my main.vi using a state machine.
And yes, I think I already have performance problems. I know the extact_column_data.vi could be the bottleneck, but I've tried it without this case selector (only one case) and it doesn't improve the performance much.
When I want to read 1 million rows, with only one row per read (per loop iteration), even at 1 ms per iteration it would theoretically take 1000 s.
My goal is to read the data from the database and display it in a graph. The data comes from a constant measurement of sensors every 10 seconds, and I need to display at least the last year in one graph. Yes, I know 1 million rows would be too much to display; that's why I am going to combine this with a decimate function in my SQLite query (extract every n-th row only) for the one-year display.
Still, it would be a great help to be able to extract a few rows in one iteration rather than only one. Or would that take too much work, so it is better to stay with one row per iteration?
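As a sketch of what "a few rows per iteration" could look like on top of the row-at-a-time API, here is the pattern in Python's built-in sqlite3 module (fetchmany() simply calls the one-row fetch N times and hands back a batch, which is what a LabVIEW wrapper returning a 2D array per loop pass would do; names are invented):

```python
import sqlite3

# In-memory stand-in for the real database file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts INTEGER, value REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(i, float(i)) for i in range(2500)])

cur = conn.execute("SELECT ts, value FROM samples")
chunks = []
while True:
    chunk = cur.fetchmany(1000)   # up to 1000 step() calls per loop pass
    if not chunk:                 # empty batch: no rows left
        break
    chunks.append(chunk)

print(len(chunks))  # 3 batches: 1000 + 1000 + 500 rows
```

This doesn't reduce the number of step() calls, but it can cut per-iteration overhead (loop bookkeeping, UI updates) by a large factor.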
regards,
Yan
08-24-2021 02:38 AM
I've looked into a database VI in LabVIEW 2019 and found this, but I think it's not useful for my SQLite database.
It seems it can SELECT multiple rows and output the data as a 2D array.
regards,
Yan
08-24-2021 02:49 AM - edited 08-24-2021 02:52 AM
I've tried interfacing the Database Connectivity Toolkit with SQLite before and I couldn't get it to work, so I think it won't work for you either.
There is a reason why SQLite has its own toolkit.
EDIT: In the DCT there is also a "DB Tools Fetch Recordset Data" VI, which returns anything and everything you ask of it. If you ask it to return the whole table, it will; you'll just have to parse it out of the recordset.
08-24-2021 03:54 AM
Thank you for your inputs and your suggestion, but I prefer the one with "SELECT", not "SELECT ALL".
I just thought that someone in the forum may have had a similar problem and found a way to deal with it.
regards,
Yan
08-24-2021 05:05 AM
If you plan to decimate data, you should create a stored procedure on the DB server that decimates the data and sends the smaller set back as the answer.
08-30-2021 05:45 AM
@Yamaeda wrote:
If you plan to decimate data you should create a Stored Procedure on the DB server that decimates and sends the smaller set of data as answer.
Hi, I think I need your help with this, because I'm seeing two different speeds here.
I found out that decimation in the script happens when I use rowid % n = 0, where n means only every n-th element of the array is output. In case 1, I saw this effect: with bigger n, the data loads faster. But in case 2, I keep seeing how slowly the data loads.
case 1:
case 2:
Honestly, it's still a bit confusing for me to use case 2, but which one would you suggest to get optimum speed?
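The rowid % n decimation described above can be sketched like this (Python's built-in sqlite3 module standing in for the LabVIEW wrapper; table and column names are invented):

```python
import sqlite3

# In-memory stand-in for the measurement database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts INTEGER, value REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?)",
                 [(i, float(i)) for i in range(100_000)])

n = 100  # keep only every 100th row
cur = conn.execute(
    "SELECT ts, value FROM samples WHERE rowid % ? = 0", (n,))
rows = cur.fetchall()

print(len(rows))  # 1000 rows instead of 100,000
```

One caveat worth noting: the % filter still forces SQLite to scan every row of the table; the speedup comes from far fewer rows being handed back to the caller and processed in the loop, not from reading less of the file.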
regards,
yan