
LabVIEW


Tick Count (Linux)

Hi all!

I'm trying to port an application from PharLap to Linux RT. When I try to deploy the application, I get the following error: "Deploying nirviCommon.vi loaded with errors on the target and was closed."

This error is caused by the Tick Count VI (which returns the value of a free-running counter in the units specified). It seems this VI is not supported on Linux RT. Which function or VI should I use to replace the unsupported one? The Tick Count (ms) function seems to work, but it only gives me ms precision instead of µs or ticks...

 

Any ideas?

 

Message 1 of 9

You should be able to set up a Call Library Node to call the Linux high-resolution timer functions. I don't have a Linux installation handy, but you should enter libc.so (or possibly just c.*) as the library name and clock_gettime as the function name. The second parameter is a pointer to a struct timespec, which holds a seconds value followed by a nanoseconds value (both C long integers, so 64-bit on a 64-bit target).
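For reference, the call that such a Call Library Node would make can be sketched in Python with ctypes. This is a sketch only, assuming a glibc-based Linux system (where the library actually resolves as libc.so.6 and both timespec fields are C long); the struct layout and clock id come from `<time.h>`:

```python
import ctypes

# struct timespec from <time.h>: tv_sec and tv_nsec (both C long on Linux)
class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

CLOCK_MONOTONIC = 1  # free-running clock id from <time.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def monotonic_us():
    """Read the free-running monotonic clock, in microseconds."""
    ts = Timespec()
    if libc.clock_gettime(CLOCK_MONOTONIC, ctypes.byref(ts)) != 0:
        raise OSError(ctypes.get_errno(), "clock_gettime failed")
    return ts.tv_sec * 1_000_000 + ts.tv_nsec // 1000

t0 = monotonic_us()
t1 = monotonic_us()
print(t1 - t0)  # time between the two reads, in µs
```

In a Call Library Node the equivalent configuration would be two parameters: an int32 clock id passed by value and a pointer to the struct (or a cluster adapted to type).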

 

http://man7.org/linux/man-pages/man2/clock_gettime.2.html

Rolf Kalbermatter
My Blog
Message 2 of 9

Hi!

Thanks for your answer.

I tried to call the library as libc.so, libc.*, c.*... and I always got the error "LabVIEW: (Hex 0x436) Failed to load shared library". Finally, it works with libc.so.6.

Thanks for your help

Message 3 of 9

Hi again

I just wanted to let you know that the Get Date/Time In Seconds function also works on Linux RT, and seems to perform better.

By the way, I have two more questions:

1) I have the same problem with the Wait (µs) function. I found the corresponding Linux function, clock_nanosleep: http://man7.org/linux/man-pages/man2/clock_nanosleep.2.html

which seems to work. The problem is that the call itself already takes 60 µs, so if I ask it to wait 100 µs, it actually waits 160 µs... Do you know of any other solution?

2) Do you know if there is a Linux RT forum where I can ask these kinds of questions? I think I will have many more, because our application is quite large (more than 1500 VIs).
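The overhead described in question 1 can be reproduced outside LabVIEW. Here is a Python/ctypes sketch (again assuming a glibc Linux system) that calls the relative form of clock_nanosleep and averages how much longer than requested each sleep actually takes:

```python
import ctypes
import time

# struct timespec from <time.h>
class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

CLOCK_MONOTONIC = 1
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def sleep_us(us):
    """Relative clock_nanosleep; returns the actually elapsed time in µs."""
    req = Timespec(us // 1_000_000, (us % 1_000_000) * 1000)
    t0 = time.monotonic_ns()
    libc.clock_nanosleep(CLOCK_MONOTONIC, 0, ctypes.byref(req), None)
    return (time.monotonic_ns() - t0) // 1000

# Averaged over many calls, the fixed per-call overhead becomes visible:
n = 1000
extra = sum(sleep_us(100) - 100 for _ in range(n)) / n
print(f"average extra delay beyond the requested 100 µs: {extra:.1f} µs")
```

The extra delay you see depends on the kernel and on timer slack, so the 60 µs figure is plausible but system specific.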

Again, thanks for your help

Message 4 of 9

Is that with a static library name in the Call Library Node configuration, or with the library name supplied through the diagram? Have you made sure to set the Call Library Node to execute in any thread?

 

Many of the libc functions are system calls, which generate a software interrupt to switch to kernel space, execute the function, and then switch back to user space. That is kind of expensive, but 60 µs sounds pretty long even for that. Newer kernels use various techniques such as the vDSO, which maps a small shared library into each process to expose frequently used kernel data, so that the kernel context switch becomes unnecessary; on modern CPUs they can also use a different method for context switching that causes less overhead than the old interrupt method. But NI Linux Real-Time is not using the newest and greatest kernel and might not support all of these features.

 

I did a test on an older Ubuntu 12.7 LTS installation inside a virtual machine: 32-bit Linux, LabVIEW 8.6, Intel Core i7-6600U CPU @ 2.60 GHz, running kernel 3.2.0-126.

 

This has an overhead of about 73 µs when calling the nanosleep() function. Calling librt:clock_nanosleep() instead has exactly the same overhead; libc's nanosleep() most likely forwards directly to clock_nanosleep().

 

One option to make it more accurate would be to use the clock_nanosleep() function with the TIMER_ABSTIME (1) flag. It makes things a bit more complicated, as you first have to call clock_gettime(clock_id, timespec) and then add the interval you want to wait to the timespec value retrieved that way. This still won't allow you to wait less than the overhead time, but it does avoid cumulative errors when you wait a fixed interval repeatedly, by adding each new interval to the previous nanosleep() deadline.
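To illustrate that absolute-deadline scheme, here is a Python/ctypes sketch (assuming a glibc Linux system; TIMER_ABSTIME is 1 in glibc's headers). The clock is read once, and each iteration advances the same timespec by the period instead of re-reading the clock, so the per-call overhead cannot accumulate:

```python
import ctypes
import time

# struct timespec from <time.h>
class Timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

CLOCK_MONOTONIC = 1   # clock id from <time.h>
TIMER_ABSTIME = 1     # flag: the timespec is an absolute deadline
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def add_ns(ts, ns):
    """Advance a timespec by ns nanoseconds, normalizing the fields."""
    total = ts.tv_nsec + ns
    ts.tv_sec += total // 1_000_000_000
    ts.tv_nsec = total % 1_000_000_000

period_ns = 100_000  # 100 µs period
t0 = time.monotonic_ns()

# Read the clock once; afterwards only the stored deadline is advanced.
deadline = Timespec()
libc.clock_gettime(CLOCK_MONOTONIC, ctypes.byref(deadline))

for _ in range(100):
    add_ns(deadline, period_ns)  # next deadline = previous deadline + period
    libc.clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                         ctypes.byref(deadline), None)
    # periodic work would go here

elapsed_us = (time.monotonic_ns() - t0) / 1000
print(f"100 periods of 100 µs took {elapsed_us:.0f} µs total")
```

Each individual wake-up can still be late by roughly the call overhead, but the total over many periods stays close to N × period instead of N × (period + overhead).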

 

There is an NI Linux RT forum, but that is mostly for LabVIEW-related questions and not so much about calling shared library functions. Generally there shouldn't be a big difference between Linux and NI Linux Real-Time, although NI Linux RT (currently at kernel 4.9.47) might not contain the latest and greatest features from the mainline kernel, which is already at 5.4.2 stable and 5.5-rc1 mainline.

Rolf Kalbermatter
My Blog
Message 5 of 9

Hi,

At first, thanks for this detailed answer,

I configured the library call through a Call Library Function Node, and it was set to execute in any thread.

 

I tried your solution, i.e. calling clock_gettime, adding the delay, and then calling clock_nanosleep with the flag set to 1 (TIMER_ABSTIME). Unfortunately, I still have the same problem: if I set a delay of 100 µs, the function waits between 155 and 160 µs instead.

Message 6 of 9

The abstime flag doesn't avoid the delay for the first call, of course; it supposedly even makes it worse, since the clock_gettime() call adds some overhead too. What it is supposed to do is avoid the extra delay in each subsequent call: instead of reading the clock again with gettime() each time, use the timespec value from the previous loop iteration and add your desired delay to it!

 

And your timing measurement is flawed. You never calculate the cost of a function call by measuring a single execution; instead, run it a few thousand times or more, measure the total time, and divide by the number of iterations. It's highly unlikely that LabVIEW's Get Date/Time function has both an accuracy and a resolution in the range of single microseconds, so the error of those calls adds substantially to your measurement. Rule of thumb: if you benchmark a function, loop it so that it runs for at least a second, then measure the total time and divide it by the number of iterations. Anything less suffers too much inaccuracy from the benchmark code itself!
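That rule of thumb can be sketched as a small Python helper (the function being benchmarked here, time.monotonic, is just a stand-in for whatever timer call you want to measure):

```python
import time

def benchmark(fn, min_seconds=1.0):
    """Average the cost of fn() over enough iterations to run >= min_seconds."""
    n = 1
    while True:
        t0 = time.monotonic()
        for _ in range(n):
            fn()
        elapsed = time.monotonic() - t0
        if elapsed >= min_seconds:
            return elapsed / n  # average seconds per call
        n *= 10  # not enough work yet; scale up and retry

per_call = benchmark(time.monotonic)
print(f"about {per_call * 1e9:.0f} ns per call")
```

Because the total run is at least a second, the resolution and overhead of the surrounding time measurement contribute a negligible fraction of the result.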

 

Rolf Kalbermatter
My Blog
Message 7 of 9

Hi

Thanks for your quick answer.

 

I totally agree with you: if you want to measure the performance of a function, you have to average over multiple calls.

In my example, I didn't want to measure the performance of clock_gettime() and clock_nanosleep(); I just wanted to know whether the wait works, i.e. if I ask to wait 100 µs, does it wait 100 µs or more... I was doing a functional test, not a performance test.

 

Message 8 of 9

I would say that your argument might be OK for things that take significantly longer than a millisecond or so, but for something whose execution time is in the same order of magnitude as what the Get Date/Time function itself can resolve, it is pretty close to guesstimating.

Rolf Kalbermatter
My Blog
Message 9 of 9