Just a note: earlier attempts at measuring performance show that Get Date/Time is fairly heavy; it can take up to 15 ms, so the more basic Tick Count or High Resolution Relative Seconds is a lot better in tight loops. (It might just be that the actual resolution is limited to ~15 ms.) Regardless, do you really need to measure the time of every loop iteration, or is the time for a full loop run enough?
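Since a LabVIEW diagram can't be pasted as text, here's a rough Python analogue of the distinction being made: probing a clock's *resolution* (smallest observable step) rather than its call cost. In this sketch `time.time()` stands in for "Get Date/Time in Seconds" and `time.perf_counter()` for "High Resolution Relative Seconds" — that mapping is my assumption, not anything from LabVIEW itself.

```python
import time

def min_tick(clock, samples=2000):
    """Smallest nonzero increment observed between consecutive reads of `clock`."""
    deltas = []
    prev = clock()
    for _ in range(samples):
        now = clock()
        if now != prev:          # only count reads where the clock actually advanced
            deltas.append(now - prev)
            prev = now
    return min(deltas) if deltas else float("inf")

# wall-clock step (analogue of Get Date/Time in Seconds)
wall_res = min_tick(time.time)
# high-resolution timer step (analogue of High Resolution Relative Seconds)
hires_res = min_tick(time.perf_counter)
print(f"wall clock step: {wall_res:.9f} s, perf_counter step: {hires_res:.9f} s")
```

On an old Windows kernel the wall-clock step would come out near the ~15.6 ms scheduler tick mentioned above; on modern systems both steps are typically well under a millisecond.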
Going from memory, I recall a time when "Get Date/Time..." had a *resolution* of something like 15.7 msec. I don't think it was ever time-consuming to *execute*.
IIRC, I first noticed the resolution change to 1 msec after moving from Win XP to Win 7 (skipped right over Vista), which was also a move from a 32-bit to a 64-bit OS. And I didn't notice right away; it was pointed out by a colleague because I'd long been an automatic user of Tick Count. So there may also have been a need to wait for a newer LabVIEW version to make use of the better resolution the OS was making available.
Whatever the exact time frame, "Get Date/Time..." has probably been fine for most of the last decade. (But now I'm an automatic user of "High-Res..." )
[Edit: yeah, it's probably not necessary to measure elapsed time every loop. There are some other things that could be improved upon too -- like updating the graph every loop iteration]
[Edit 2: oops. Just noticed that the code I posted was from *before* I tested and *before* changing over to "High-Res...". So, to the OP, nevermind my talk about that change. There's pretty much only the "commit" to learn about.]
Forgive me for going off topic, but I wanted to try a little benchmarking... In 64-bit LabVIEW 2018 they all take vanishingly small amounts of time to execute. I don't frequently get into nitty-gritty details like this, though, so my benchmarking effort could be way the heck off. I can say that Get Date/Time in Seconds has at least 1 ms resolution, though.
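For anyone who wants to repeat this kind of benchmark outside LabVIEW, a minimal sketch of the "how long does one timer call take" measurement looks like this in Python (again using `time.time` / `time.perf_counter` as stand-ins for the two LabVIEW timing nodes; the function name `call_cost` is just mine):

```python
import time

def call_cost(clock, n=100_000):
    """Average cost of one call to `clock`, in seconds: time a tight loop
    of n calls and divide by n, so per-call overhead dominates."""
    start = time.perf_counter()
    for _ in range(n):
        clock()
    return (time.perf_counter() - start) / n

for name, clock in [("time.time", time.time), ("perf_counter", time.perf_counter)]:
    print(f"{name}: ~{call_cost(clock) * 1e9:.0f} ns per call")
```

This measures execution cost, not resolution -- which is exactly the distinction made earlier in the thread: a timer can be cheap to call yet only advance in coarse steps.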