12-11-2020 11:15 AM
I have a VI that's writing data to disk from an FPGA. Unfortunately, my SQLite writer is having difficulty keeping up with the write speed, and I would like to know if there is anything I can do to keep up. Currently, I'm able to write 10,000 rows in 60 ms. I would love to get down to 30 ms or faster if possible.
To write the data, I have the following steps:
Please take a look at this simplified VI to see if there's anything I can do.
12-11-2020 02:33 PM
You can try combining the rows into one large INSERT statement:
INSERT INTO table1 (column1,column2 ,..)
VALUES
(value1,value2 ,...),
(value1,value2 ,...),
...
(value1,value2 ,...);
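The multi-row INSERT above can be sketched with Python's built-in sqlite3 module (the table and column names are the placeholders from the example, not the poster's actual schema; a LabVIEW SQLite library would build the same SQL). The key point is one statement per batch and one transaction overall, instead of one INSERT and one implicit commit per row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in practice
conn.execute("CREATE TABLE table1 (column1 REAL, column2 REAL)")

rows = [(float(i), 2.0 * i) for i in range(10_000)]

# Batch many rows into one multi-row INSERT per statement, all inside
# a single transaction. BATCH is chosen to keep the parameter count
# under SQLite's host-parameter limit on older builds.
BATCH = 500
with conn:  # commits once at the end instead of per row
    for start in range(0, len(rows), BATCH):
        chunk = rows[start:start + BATCH]
        sql = ("INSERT INTO table1 (column1,column2) VALUES "
               + ",".join(["(?,?)"] * len(chunk)))
        conn.execute(sql, [v for row in chunk for v in row])

print(conn.execute("SELECT COUNT(*) FROM table1").fetchone()[0])  # 10000
```

The batching matters more than the multi-row syntax itself: wrapping all the inserts in one transaction avoids a disk sync per row.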
Or you need a faster HDD 🙂
How long do you plan to write this data? If not for very long, then perhaps you can accumulate the data in memory and write it to the database after the acquisition finishes.
12-11-2020 03:22 PM
12-11-2020 03:59 PM
12-12-2020 01:28 PM
Measure the write speed of a binary file.
And a workaround for solving the problem is to write a binary file, and after the experiment, transfer it to the database.
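That workaround can be sketched in Python (the record layout of two doubles per row is an assumption for illustration, as is the file name): append fixed-size binary records during the experiment, then bulk-load the file into SQLite afterwards.

```python
import sqlite3
import struct

# 1) During the experiment: append fixed-size binary records (fast).
rows = [(float(i), 2.0 * i) for i in range(1000)]
with open("capture.bin", "wb") as f:
    for a, b in rows:
        f.write(struct.pack("<dd", a, b))  # two little-endian doubles

# 2) Afterwards: read the file back and bulk-load it into SQLite.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (column1 REAL, column2 REAL)")
with open("capture.bin", "rb") as f, conn:
    records = struct.iter_unpack("<dd", f.read())
    conn.executemany("INSERT INTO table1 (column1,column2) VALUES (?,?)",
                     records)

print(conn.execute("SELECT COUNT(*) FROM table1").fetchone()[0])  # 1000
```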
12-13-2020 02:08 PM - edited 12-13-2020 02:18 PM
PRAGMA synchronous:
0 | OFF
1 | NORMAL
2 | FULL
PRAGMA temp_store:
0 | DEFAULT
1 | FILE
2 | MEMORY
Use:
PRAGMA synchronous = 0
PRAGMA temp_store = 2
But be careful, this is a quick speed-up but not a safe one: with synchronous off, a crash or power loss can corrupt the database.
12-14-2020 08:50 AM
@IvanLis wrote:
PRAGMA synchronous:
0 | OFF
1 | NORMAL
2 | FULL
PRAGMA temp_store:
0 | DEFAULT
1 | FILE
2 | MEMORY
Use:
PRAGMA synchronous = 0
PRAGMA temp_store = 2
But be careful, this is a quick but not safe way to speed up.
Thanks!
With these changes, my average write time went down from 60 ms to 35 ms.
FYI: Trying the binary write got me 12 ms, so if this continues to be an issue, I can write the raw file and convert it to the database with a separate utility.
12-14-2020 09:23 AM
You seem to have gotten the speed that you want, but I'll say what I first thought of anyway.
Call the VI that writes a row of data asynchronously, and enable re-entrant execution so multiple rows can be written simultaneously.
Saying "Thanks that fixed it" or "Thanks that answers my question" and not giving a Kudo or Marked Solution, is like telling your waiter they did a great job and not leaving a tip. Please, tip your waiters.
12-14-2020 10:33 AM
@FireFist-Redhawk wrote:
You seem to have gotten the speed that you want, but I'll say what I first thought of anyway.
Call the VI that writes a row of data asynchronously, and enable re-entrant execution so multiple rows can be written simultaneously.
I'll have to test that out if I run into the issue again. I'm concerned about having multiple writers to the same file, though, but I suppose unique transaction IDs for each async call would mitigate that.
12-14-2020 11:34 AM
@JScherer wrote:
FYI: Trying the binary write, got me 12ms, so if this continues to be an issue, I can do the raw file with a separate conversion utility.
I would recommend using binary TDMS.
The access speed is about the same, but it is much easier to organize writing and reading the data.