LabVIEW


How can I tell the latency of an NI USB-9263 using Python?


Hello.

 

I need to know the latency of the NI USB-9263 when controlled from Python.

How can I find it out?

Thanks!

Gil,

Message 1 of 5

In general, you should not be writing one sample at a time if you're looking to generate something that is time-critical.

 

The latency depends on how effectively your OS can prioritize USB transactions. Whenever you need time-critical execution, you would go with an RTOS and an FPGA, not a Windows OS, and definitely not Python.
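That said, to put an actual number on the original question, one approach is to measure the per-call latency empirically from Python itself. Below is a minimal sketch; `dummy_write` is a hypothetical stand-in for a real single-sample `task.write(value)` call from the `nidaqmx` package, so the script runs even on a machine without NI-DAQmx installed.

```python
import time
import statistics

def measure_call_latency(write_fn, n_calls=1000):
    """Time n_calls invocations of write_fn and return the per-call
    latencies in seconds (wall-clock, via the high-resolution counter)."""
    latencies = []
    for _ in range(n_calls):
        t0 = time.perf_counter()
        write_fn()
        latencies.append(time.perf_counter() - t0)
    return latencies

# Stand-in for a real single-sample DAQmx write (e.g. task.write(0.5)
# with the nidaqmx package); a no-op here so the sketch runs anywhere.
def dummy_write():
    pass

lat = measure_call_latency(dummy_write)
print(f"min {min(lat)*1e6:.1f} us, "
      f"median {statistics.median(lat)*1e6:.1f} us, "
      f"max {max(lat)*1e6:.1f} us")
```

With the real write call substituted in, the spread between the median and the maximum is the interesting part: on a desktop OS the worst case for a USB transaction can be orders of magnitude above the typical case.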

Santhosh
Soliton Technologies

Message 2 of 5
Solution
Accepted by topic author gilmaor1

I think the remark about Python needs some clarification. If you need hard real-time, you have to use a solution that employs the hardware timing circuitry on the board for all of the critical timing, together with a driver interface that can buffer data (with the help of the onboard buffers of the hardware itself). Once you have that, the application itself is fairly unimportant, and you can do it in Python too, not just C or LabVIEW.
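As a hedged illustration of that hardware-timed, buffered approach from Python: the sketch below builds a waveform buffer on the host and, if the `nidaqmx` package is available, hands the whole buffer to the device so its own sample clock does the pacing. The device name `Dev1`, the rates chosen, and whether this particular USB carrier supports hardware-timed output are assumptions to verify against the device specifications.

```python
import math

def make_sine(rate_hz=1000, freq_hz=10, amplitude=2.0, n_samples=1000):
    """Build one buffer of sine samples on the host; the device clock,
    not Python, will pace their output."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n_samples)]

waveform = make_sine()

# Hardware-timed, buffered generation (device name "Dev1" is assumed).
# nidaqmx is only importable with NI-DAQmx installed, so guard it.
try:
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        # The onboard sample clock paces the output, not the Python loop:
        task.timing.cfg_samp_clk_timing(
            rate=1000,
            sample_mode=AcquisitionType.CONTINUOUS,
            samps_per_chan=len(waveform))
        task.write(waveform)   # entire buffer transferred up front
        task.start()
except ImportError:
    pass  # nidaqmx not installed; the waveform build still demonstrates the idea
```

The key design point is that Python only fills and transfers the buffer; once `task.start()` runs, the timing-critical work happens in hardware, which is why the choice of application language stops mattering.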

 

If you need soft real-time, where you want a guaranteed reaction within a few tenths of a millisecond rather than microseconds, you can use a real-time system and drive the hardware through that, but the driver needs to provide low-latency code paths for this to work.

 

On a system like Windows there is no guarantee at all. Windows can theoretically decide to reconfigure its hardware driver stack for any reason and block the entire system for multiple seconds, during which no process, and not even a device driver, can squeeze in to claim some CPU time. Windows decides it needs to do this, locks the system, and nobody else gets in until Windows deems it safe again. Every application is limited by this, not just Python.
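One way to see this lack of guarantee for yourself is to measure the jitter of a software-timed loop, which is what single-sample-per-iteration generation amounts to. A small sketch, assuming only the standard library:

```python
import time

def loop_jitter(period_s=0.001, n_iters=200):
    """Run a sleep-paced loop and return each iteration's deviation from
    the requested period, in seconds. On a desktop OS the worst case can
    be far larger than the median -- exactly the 'no guarantee' problem."""
    deviations = []
    prev = time.perf_counter()
    for _ in range(n_iters):
        time.sleep(period_s)
        now = time.perf_counter()
        deviations.append((now - prev) - period_s)
        prev = now
    return deviations

dev = loop_jitter()
print(f"worst-case overrun: {max(dev)*1e3:.3f} ms")
```

On an idle machine the overrun may look tolerable; under load, or when the OS decides to do housekeeping, the worst case grows unpredictably, and no amount of application-level code can prevent that.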

 

The difference between Python and LabVIEW or C is that Python is interpreted. When you do single-sample analog or digital acquisition or generation, each call into the driver has to pass through a whole chain of driver layers, eventually switch to kernel mode to call the actual hardware driver, and then return with the result. That takes a lot of time in terms of computer timing. For an application written in LabVIEW or C, this passage through the driver layers dominates completely, and the code execution in the program itself takes minimal time, unless it is programmed as a Rube Goldberg contraption.

In Python, the code is not only interpreted, but every variable is an object that has to be accessed, verified, managed, and handled properly. That usually makes Python code run significantly slower than the exact same construct in C or LabVIEW. So if you are doing single-sample acquisition or signal generation, Python can indeed be significantly slower than the others. But unless you do that on a real-time system, the intrinsic non-real-time character of the OS makes this mostly a moot point anyway, since you cannot guarantee that such a single-sample acquisition or generation actually happens at a fixed interval. The chances are better with a language that is fully compiled and has compile-time typing, but they are never 100%.
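The per-operation interpreter overhead described above is easy to demonstrate without any hardware: the same reduction run as an explicit Python loop versus inside the C runtime. A minimal sketch, assuming only the standard library:

```python
import time

def py_loop_sum(values):
    """Sum with an explicit Python loop: every iteration goes through the
    interpreter and boxes each number as an object."""
    total = 0.0
    for v in values:
        total += v
    return total

data = [float(i) for i in range(100_000)]

t0 = time.perf_counter()
a = py_loop_sum(data)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
b = sum(data)  # the same left-to-right reduction, executed in the C runtime
t_builtin = time.perf_counter() - t0

assert a == b  # identical operation order, so identical result
print(f"interpreted loop: {t_loop*1e3:.2f} ms, builtin sum: {t_builtin*1e3:.2f} ms")
```

The built-in is typically several times faster for the identical arithmetic, which is the same gap that shows up between a Python single-sample loop and the equivalent C or LabVIEW loop, on top of the driver-call cost that both pay equally.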

Rolf Kalbermatter
Message 3 of 5

Hi Santhosh,

Thanks for the explanation.

Gil,

Message 4 of 5

Thanks!

Message 5 of 5