10-18-2010 10:07 AM
Deepak,
a queue has the advantage that everything written to it comes out in the very same order. So you may have delays in writing, but the order is preserved...
And, no, I am not talking about cloning anything. You just obtain the same queue several times (in your example 100 or more times per PC) and write data to it (one item per second per instance). A single reader (dequeue) will put that data into the DB...
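As a rough sketch of that pattern outside LabVIEW (Python's `queue.Queue` standing in for the LabVIEW queue, a plain list standing in for the database; all names here are made up for illustration):

```python
import queue
import threading

q = queue.Queue()   # one shared queue; every writer enqueues into the same instance
db_rows = []        # stand-in for the database table

def writer(writer_id, n_samples):
    # each "clone" just enqueues; the order of its own samples is preserved
    for i in range(n_samples):
        q.put((writer_id, i))

def db_reader():
    # single dequeue point: the only place that touches the DB
    while True:
        item = q.get()
        if item is None:   # sentinel to shut down
            break
        db_rows.append(item)

reader = threading.Thread(target=db_reader)
reader.start()

writers = [threading.Thread(target=writer, args=(w, 5)) for w in range(3)]
for t in writers:
    t.start()
for t in writers:
    t.join()

q.put(None)   # tell the reader we're done
reader.join()
print(len(db_rows))   # 15 rows, each writer's samples still in its own order
```

Interleaving between writers is arbitrary, but each writer's own samples stay ordered, and only one consumer ever talks to the DB.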
hope this helps,
Norbert
10-18-2010 10:14 AM
@Norbert B wrote:
Deepak,
a queue has the advantage that everything written to it comes out in the very same order. So you may have delays in writing, but the order is preserved...
And, no, I am not talking about cloning anything. You just obtain the same queue several times (in your example 100 or more times per PC) and write data to it (one item per second per instance). A single reader (dequeue) will put that data into the DB...
hope this helps,
Norbert
His app is scattered across multiple PCs, so a queue would only help within each machine.
If I understand correctly (check the image and don't trust me), the number of I/Os per second is the same but the number of connections (1 PC vs 5 PCs) is different.
Unless there is something about that version of the DB and multiple connections...
Ben
10-18-2010 10:17 AM
Ben mentions a couple of good thoughts: take some timing on a single database query, and compare it against a parameterized query.
If you only need to log once per second, but from 20 sources, I'd still recommend bunching those together into a single query instead of pushing 20 separate queries; it's possible the parameterized version alone is fast enough.
Keeping the database connection open is a good call, it'll go faster, but as mentioned above each query still costs network time plus waiting for its result.
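Roughly what "bunch 20 sources into a single parameterized query" means, sketched with Python's built-in sqlite3 (the thread's actual DB is SQL Server over ADO, and the table/column names here are invented):

```python
import sqlite3

# keep one connection open for the life of the app instead of reconnecting per insert
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (source INTEGER, value REAL)")

# one sample per second from each of 20 sources, collected into a batch
samples = [(src, src * 0.5) for src in range(20)]

# one parameterized, batched statement instead of 20 separate round trips
conn.executemany("INSERT INTO log (source, value) VALUES (?, ?)", samples)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 20
```

The point is the round-trip count: one network exchange per second per machine instead of twenty, with the parameterized statement letting the server reuse its query plan.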
/Y
10-25-2010 09:59 PM
100 clones distributed across 5 machines work fine, and the machine load is low, so I guess it's neither the DB's limit nor the Database Toolkit VI overhead limit.
Did you miss changing some subVIs of the Database Toolkit to be reentrant?
10-26-2010 01:36 AM
If you convert all DB Toolkit VIs to reentrant, it will throw an error. They call ActiveX, which doesn't support reentrancy. Also notice that if you run 100 clones on a single machine, you won't be able to insert 100/second into the database...
10-26-2010 04:26 AM
Yes, I am suffering from this error now.
When I change all the VIs to "Reentrant execution - Share clones between instances", it works with 20 or 40 clones, but when I run 60 clones it returns an error; very unstable.
The problem is that if some subVI is not reentrant, that might be what slows down your application.
10-26-2010 09:20 PM
I wrote a lightweight database insert VI; see the attached VI.
I manipulate the ADO objects directly in this VI, and it works. Hope it can help you.
I tested it using SQL Server; the summary:
20 clones -> 1201 records per min
60 clones -> 3612 records per min
100 clones -> 6137 records per min
10-29-2010 01:00 AM
Hi Ktian,
We tested the VIs you sent. It works fine for 100 clones with only 2 columns (fields) in the table.
But when we tested it with 1000 clones and 52 columns in the DB table, it was only inserting approx. 8560 rows/min.
10-29-2010 02:22 AM
Are the database and your application on the same machine? Is the network good? Is the computer's memory big enough? Is the CPU fast enough? Is your data too large? Can you please narrow it down?
I tested it on my computer with SQL Server: it wrote 58997 rows per min with 1000 clones, but with only two columns. How did you change the example VI to insert 52 columns? Can you give me a snapshot of your VI?
10-29-2010 02:38 AM
@D.S wrote:
We tested the VIs you sent. It works fine for 100 clones with only 2 columns (fields) in the table.
But when we tested it with 1000 clones and 52 columns in the DB table, it was only inserting approx. 8560 rows/min.
52 columns of floats, approx. 10 chars/column plus some SQL overhead ~ 600 bytes per row
1000 clones at 1 row/sec each => ~600 000 bytes/sec ≈ 4,8 Mbit/sec of raw payload as the target
8560 rows/min actually achieved => ~5,1 MBytes/min ≈ 0,7 Mbit/sec going through, so per-query overhead on the wire matters more than the raw bytes
You're on a 100Mbit network?
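Spelling that back-of-the-envelope arithmetic out (600 bytes/row is the estimate above; the rest is straight arithmetic, so treat the figures as a sketch):

```python
BYTES_PER_ROW = 600          # thread's estimate: 52 float columns, ~10 chars each, plus SQL overhead
CLONES = 1000                # writers, each inserting one row per second
ACHIEVED_ROWS_PER_MIN = 8560 # what the 52-column test actually managed

# what the app is trying to push, as raw payload
target_bytes_per_sec = CLONES * BYTES_PER_ROW
target_mbit_per_sec = target_bytes_per_sec * 8 / 1e6

# what actually got through
achieved_bytes_per_sec = ACHIEVED_ROWS_PER_MIN / 60 * BYTES_PER_ROW
achieved_mbit_per_sec = achieved_bytes_per_sec * 8 / 1e6

print(f"target:   {target_mbit_per_sec:.1f} Mbit/s")    # 4.8 Mbit/s
print(f"achieved: {achieved_mbit_per_sec:.2f} Mbit/s")  # 0.68 Mbit/s
```

Both figures sit well under a 100 Mbit link, which is why the round-trip cost per individual query (and not the payload size) is the number worth chasing.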
/Y