
can timed loop clock be WRONG?

The only thing that seems funny to me is the use of the local variable 'write iteration'.  If your bottom loop is running 'full throttle', then you're updating 'write iteration' at a very high rate.

 

I would suggest removing the local variable and replacing it with a queue set to a size of one.  Use the 'lossy enqueue element' in the lower loop and dequeue the current value in the upper loop.
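LabVIEW block diagrams are graphical, so here is a rough text-language analogue of the size-1 lossy queue pattern, sketched in Python (the names and structure are illustrative, not from the original VI):

```python
from queue import Queue, Full, Empty

def lossy_put(q: Queue, item) -> None:
    """Rough analogue of LabVIEW's 'Lossy Enqueue Element' on a size-1
    queue: if the queue is full, drop the stale value and insert the new one."""
    try:
        q.put_nowait(item)
    except Full:
        try:
            q.get_nowait()   # discard the old value
        except Empty:
            pass             # a reader emptied the queue in the meantime
        q.put_nowait(item)

# The fast lower loop updates the value at a high rate...
q = Queue(maxsize=1)
for i in range(1000):
    lossy_put(q, i)

# ...while the upper loop only ever sees the latest value.
latest = q.get_nowait()
print(latest)  # → 999
```

With a single producer and a single consumer this avoids the local variable entirely, and the consumer can never read a stale value.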

Message Edited by Phillip Brooks on 07-03-2009 11:14 AM
Message 11 of 39

--- Edit JB: Really, you found a way to limit a while loop without a timer? Show us all how! ---

 

The slowest operation in the while loop, here the binary write, paces the loop. Point.

 

Since you're asking: the CPU load, if Windows Task Manager is to be believed, is only 3-4%. Since the machine has 8 CPUs, that would mean less than a quarter of one CPU if a single one were employed. The load is anyway distributed among the CPUs; later on I could even post how.

 

--- Edit JB: Re "NEWBIE dismissals?" OK, have a good one! ---

 

I don't understand this comment. Not meaning to be rude, but so far I've read only the kind of suggestions you would give to a newbie, with no relation to what I'm pointing my finger at.

 

So why the timed loop, then?

 

Does it matter why? Fine, in this specific case I could have used just a normal while loop with a wait block, and I'll even try that in the next few days, to see if the readout of the millisecond tick is any different. However, do we have a bug here with "actual start" or not? Can I, in general, believe timing statistics provided by the left node of a timed loop? That is a serious point, IMHO.
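The check described above (a plain while loop with a wait block, read against an independent millisecond tick) can be sketched as follows; this is a hypothetical Python stand-in, since the actual VI is LabVIEW G:

```python
import time

# Hypothetical stand-in for the proposed check: pace a loop with a plain
# wait (like 'Wait (ms)') and compare two independent clocks over the run.
ITERATIONS = 50
PERIOD_S = 0.01  # 10 ms nominal loop period

t0_wall = time.time()       # OS wall clock (roughly what 'actual start' reports against)
t0_mono = time.monotonic()  # independent monotonic tick (like the millisecond tick count)

for _ in range(ITERATIONS):
    time.sleep(PERIOD_S)    # the wait block paces the loop

wall = time.time() - t0_wall
mono = time.monotonic() - t0_mono
print(f"wall: {wall:.3f} s, monotonic: {mono:.3f} s, ratio: {wall / mono:.3f}")
# If the two disagree by far more than scheduler jitter, timing statistics
# reported against one of the clocks cannot be trusted.
```

If both clocks agree under light load but diverge under heavy disk I/O, that points at the reported statistics rather than at the loop itself.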

 

Enrico

 

Message 12 of 39

"Can I, in general, believe timing statistics provided by the left node of a timed loop? That is a serious point, IMHO."

 

Much more serious question! Point granted!

 

The answer is - "No, you can't."(point?)

 

The operating system clock is not compared to a recognized standard (i.e., not calibrated), so its value proves exactly nothing in a scientific sense!  In fact, I know of no OS that even specifies that the system timer has any relationship to the SI second whatsoever.

 


"Should be" isn't "Is" -Jay
Message 13 of 39

The only thing that seems funny to me is the use of the local variable 'write iteration'.  If your bottom loop is running 'full throttle', then you're updating 'write iteration' at a very high rate.

 

I would suggest removing the local variable and replacing it with a queue set to a size of one.  Use the 'lossy enqueue element' in the lower loop and dequeue the current value in the upper loop.

 

 

Things could certainly be written in a zillion different ways. "write iteration" is updated, in my typical tests, some 60 times per second, as said above, which might be seen as full throttle if it were merely for displaying the number in the GUI; however, that does not seem to me an absurd update rate if some low granularity in the rate calculation is desired.

 

IIUC, the queue of size 1 would just help me prevent race conditions on "write iteration". But that neither seems to be a problem (I have only one write and one read of the local in the whole VI), nor does it apparently have anything to do with the value of "actual start".

 

Enrico  

Message 14 of 39

The operating system clock is not compared to a recognized standard (i.e., not calibrated), so its value proves exactly nothing in a scientific sense!  In fact, I know of no OS that even specifies that the system timer has any relationship to the SI second whatsoever.

 

 

Yes, but wrong by a factor of 2? Moreover, correct if a file is written to one disk (a slow one), and completely off if it is written to another (a fast one, hogging some bus)?

I can live with the clock being generated by a quartz crystal and off by some 100 ppm w.r.t. the atomic standard, but ~100%???

 

Enrico

Message 15 of 39

Enrico Segre wrote:
 

Yes, but wrong by a factor of 2? Moreover, correct if a file is written to one disk (a slow one), and completely off if it is written to another (a fast one, hogging some bus)?

I can live with the clock being generated by a quartz crystal and off by some 100 ppm w.r.t. the atomic standard, but ~100%???

 

Enrico


The value returned by the OS is the value returned by the OS.  Off by 100 ppm, off by 100%, or a pseudo-randomly generated value!  LabVIEW asked for A value from the OS; the OS returned A value.  Nothing more meaningful than the simple fact that a value was returned to LabVIEW.

 

Whatever meaning that value has (if any) is dependent on the OS.  Whatever assumptions the programmer makes about the value's meaning are without foundation.  (Point?)


"Should be" isn't "Is" -Jay
Message 16 of 39

The value returned by the OS is the value returned by the OS.  Off by 100 ppm, off by 100%, or a pseudo-randomly generated value!  LabVIEW asked for A value from the OS; the OS returned A value.  Nothing more meaningful than the simple fact that a value was returned to LabVIEW.

 

Whatever meaning that value has (if any) is dependent on the OS.  Whatever assumptions the programmer makes about the value's meaning are without foundation.  (Point?)

 

Nice try. But not yet point. What you say would imply that I'd have to turn to Windows in this funny heavy-I/O-load scenario and ask it why the hell it is reporting a completely arbitrary clock.

 

But it so happens that I can also check the disk write rate, under the same conditions, with a well-known test utility (iostat, if I don't err), and surprise surprise, it gives me correct rates. That is, it must be reading some motherboard timer, perhaps without asking the OS, but that reading is correct (or it has a smarter way of writing to the disk which doesn't hog the memory controller, or the PCIe switchboard, or whatever). So what is LV doing? Could you suggest a way to check whether another program, say a C executable, gets the same wrong OS timer or not?
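One way to run the cross-check asked for here, sketched in Python rather than C (the path, chunk size, and total are illustrative): time a bulk write with the process's own clock, compute a rate, and compare it with what iostat reports for the same interval.

```python
import os
import tempfile
import time

CHUNK = b"\0" * (1 << 20)   # 1 MiB per write
N = 64                      # 64 MiB total; adjust to the disk under test

path = os.path.join(tempfile.gettempdir(), "timer_check.bin")
t0 = time.monotonic()
with open(path, "wb") as f:
    for _ in range(N):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())    # force the data out of the OS buffer cache
elapsed = time.monotonic() - t0
os.remove(path)

rate = N / elapsed          # MiB/s according to this process's own clock
print(f"{N} MiB in {elapsed:.3f} s -> {rate:.1f} MiB/s")
# If iostat reports a very different rate over the same interval, the
# discrepancy is in the clock this process reads, not in the disk.
```

The same measurement done from a small C program with clock_gettime would answer whether the suspect timer reading is specific to LV or common to everything on that machine.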

 

Enrico


Message 17 of 39

So what is LV doing? Could you suggest a way to check whether another program, say a C executable, gets the same wrong OS timer or not?

 

LabVIEW is receiving the value it asked for.  Yep, other timers exist!  Yep, other applications depend on other sources.  NOPE, I can't offer a method to reliably compare the value returned to LV with the value returned to another program.  (And why would I? It would not change what LV gets, nor provide any correlation to a meaningful unit of time anyhow.)

 

It's great that iostat (you don't err, to my knowledge) has what you consider better timing information.  But against what recognized standard did you compare the iostat time, to "prove" your assumption that because the data is more to your liking, it is more accurate?


"Should be" isn't "Is" -Jay
Message 18 of 39

Jumping back: your question "can timed loop clock be WRONG" is semantically unanswerable.  A better and less obvious question is, "Can a timed loop clock be right?"

 

 


"Should be" isn't "Is" -Jay
Message 19 of 39

I recall one other problem from when I was performing high-speed data logging some years ago; it had to do with the operating system buffering large amounts of data after the LabVIEW write function calls. I periodically lost data because the system was taking too long to write the huge buffer to disk.  I fixed the problem using the 'flush' function.

 

I used a quotient-and-remainder function along with the loop counter to periodically call the flush function. The frequency with which to call the flush function depends upon the data frequency and size.
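The quotient-and-remainder flush pattern described above might look like this in a text language; a Python sketch with illustrative constants, not taken from the original VI:

```python
import os
import tempfile

FLUSH_EVERY = 100  # how often to flush; depends on the data rate and record size

path = os.path.join(tempfile.gettempdir(), "qr_flush_demo.bin")
with open(path, "wb") as f:
    for i in range(1000):                  # stands in for the loop counter
        f.write(i.to_bytes(4, "little"))   # the binary write
        if i % FLUSH_EVERY == 0:           # remainder of i / FLUSH_EVERY is 0
            f.flush()                      # hand the buffer to the OS early
size = os.path.getsize(path)
print(f"wrote {size} bytes with periodic flushes")
os.remove(path)
```

Flushing periodically keeps the OS buffer small, so no single write stalls long enough to make the loop (or its timing statistics) look pathological.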

 

Maybe the problem is not in the timed loop, but in the lower loop. 

 

Just an idea.

 

 

Message Edited by Phillip Brooks on 07-03-2009 01:02 PM
Message 20 of 39