I am working with a number of different DAQ devices for experiment control in a lab environment, where we need to trigger hardware-timed sequences off the 60 Hz power cycle. We do this by pausing and retriggering the external clock for the connected DAQs that generate the AO, AI, and DO lines. For a timing reference, we use a DAQ counter to measure the length of the pause with a semi-period measurement of two pulses on a DO line of the same device: one at the beginning of the run and one at the end of the wait, once things are retriggered.
The problem I'm having is that the X-series DAQs I have only measure the falling edge of the first pulse, then both edges of the second. The M-series devices I have measure all four edges. I want to use the same code for all the devices since they are practically identical in function, but I can't at the moment, since the number of edges read depends on the device used. I can't find any reason in the documentation why there should be a difference. I'd really like to make the behavior consistent so that the code base doesn't need to diverge or implement hacks.
The devices I have access to are a USB-6343, a USB-6229, and a PXI-6259. I'm using PyDAQmx to access the C API. I have two separate setups, using NI-DAQmx 15.5.1 or 9.9. The setup code is:
self.acq_task = Task()
self.acq_task.CreateCISemiPeriodChan('usb_6343/ctr0', '', 100e-9, 200, DAQmx_Val_Seconds, "")
self.acq_task.CfgImplicitTiming(DAQmx_Val_ContSamps, 1000)
self.acq_task.StartTask()
The edges are then read out one at a time by a read thread using the ReadCounterF64 method, to get all four edges. When the first edge is missing, the timeouts and timing logic have to be changed.
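For reference, the read thread boils down to something like this simplified sketch (the helper names are illustrative, not the actual labscript code; `task` is assumed to be the started semi-period task configured above):

```python
from ctypes import byref

def read_semi_periods(task, n_edges, timeout=10.0):
    """Read n_edges semi-period samples one at a time, as the read thread does.
    `task` is assumed to be a started PyDAQmx CI semi-period Task."""
    # Deferred imports: only needed when talking to actual hardware.
    import numpy
    from PyDAQmx import int32
    data = numpy.zeros(1, dtype=numpy.float64)
    n_read = int32()
    samples = []
    for _ in range(n_edges):
        # ReadCounterF64(numSampsPerChan, timeout, readArray,
        #                arraySizeInSamps, sampsPerChanRead, reserved)
        task.ReadCounterF64(1, timeout, data, 1, byref(n_read), None)
        samples.append(float(data[0]))
    return samples

def edge_times(semi_periods, t0=0.0):
    """Convert successive semi-period widths (seconds) into cumulative
    edge times relative to the counter start."""
    times, t = [], t0
    for width in semi_periods:
        t += width
        times.append(t)
    return times
```

The `edge_times` helper is where the device difference bites: if the first (partial) interval is silently dropped, every cumulative time downstream is off by one sample.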
The only other thing I've noticed in my testing is that the M-series devices show the first edge taking place at some large delay, consistent with starting the task in software time before the hardware-timed tasks are configured. The X-series devices, however, always have the first edge measurement give the width of the first pulse. If I didn't know better, I'd think the counter start was being triggered by the rising edge, but that's not how it's configured as far as I know. Why would only the X-series behave that way?
Any insight would be great!
(I've attached the full source for the curious. It's part of a larger software suite known as labscript)
Bad news. Unfortunately, the newer X-series boards behave differently than prior generations of boards such as M-series, E-series, and the TIO-series counter boards. They won't return any measurement value for "partial periods", so the first measurement value you get comes at the 2nd active edge when the first full interval completes.
On the surface, it sounds like a kinda good idea, and there are a lot of cases where it's at least unharmful and arguably even helpful. However, it's a *bad* idea for some specific circumstances, such as using an encoder as a constant-displacement external clock while doing period / freq measurement to characterize speed. Every now and then, I grouse again about this design decision (for example here, also here, and I'm sure there are some other times too.)
Unfortunately, you're either gonna have to restrict what hardware gets used or try to code around the issue. I can't quite tell whether your various signals' timing will allow you to code around it -- I do know that it isn't always possible. There are device properties that can be queried to help identify whether a board is M-series or X-series. I only know the LabVIEW API though, and don't know the C API.
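In the C API, the device-property query mentioned above maps to DAQmxGetDevProductType, which fills a buffer with the model string. A rough PyDAQmx sketch (the series lookup table is an assumption covering only the devices named in this thread; extend it for other hardware):

```python
def get_product_type(device_name):
    """Query a device's product type string (e.g. 'USB-6343') via the
    DAQmx C API's DAQmxGetDevProductType."""
    # Deferred import: only needed when a DAQmx installation is present.
    from ctypes import create_string_buffer
    from PyDAQmx import DAQmxGetDevProductType
    buf = create_string_buffer(256)
    # The C API expects a char* device name; encode for Python 3.
    DAQmxGetDevProductType(device_name.encode(), buf, 256)
    return buf.value.decode()

# Assumed mapping: only the three devices from this thread are listed.
_SERIES = {'USB-6343': 'X', 'USB-6229': 'M', 'PXI-6259': 'M'}

def is_x_series(product_type):
    """True if the model string belongs to a known X-series board."""
    return _SERIES.get(product_type) == 'X'
```

With that, the read code can branch on `is_x_series(get_product_type('usb_6343'))` instead of diverging per device.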
P.S. I just went over to the Idea Exchange to write up my complaint and found that I had already done so a couple months back. If you agree give the idea some kudos support over there.
I figured it was some default option that had changed. Too bad it can't be configured on the user's end. I guess I'll just work around it, which isn't that bad in the grand scheme of things. I'll be sure to add a kudo to the complaint.
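The workaround can be as simple as padding the X-series sample stream so downstream indexing matches the M-series behavior. A minimal sketch (`normalize_samples` is a hypothetical helper, not part of labscript):

```python
import math

def normalize_samples(samples, x_series):
    """Make X-series semi-period reads line up with M-series ones.

    X-series boards discard the first, partial interval ('Incomplete
    Sample Detection'), so their first returned value is already the
    width of the first pulse. Prepend a NaN placeholder so indices match
    the M-series stream, where sample 0 is the (software-timed) delay
    before the first edge.
    """
    if x_series:
        return [math.nan] + list(samples)
    return list(samples)
```

Downstream code then treats sample 0 as "delay before the first edge, if known" on every device, and only the NaN check differs.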
I don't think I would mind so much if it weren't so stinking hard to find out what had changed. I must have pored over the documentation for hours trying to figure this out on my own. Who knew it was called "Incomplete Sample Detection" and was buried in a tiny stub of the DAQmx help? Ugh, I'll increment the scoreboard (Me: 0, LabVIEW Docs: ∞+1)
Thanks for your help!!