
Timing synchronization options between PXI E Series and X Series modules


Hi,

 

Thank you for your input. 

 

The full list of equipment is:

 

Chassis: PXI-1033

Slots 2-5: PXI-4496 (not PXIe)

Slot 6: PXI-6723 (not PXIe)

 

Slots 2-5 will be using 40 channels of IEPE microphones + 2 voltage inputs (two other microphones that use 48 V) + 1 voltage input (a reference fed back from the analog output)

 

Slot 6 will be using 18 channels of voltage output + 1 channel fed back to an input as a reference.

 

I am currently using a shared trigger to sync the input and output, and all input channels are configured through channel expansion.

 

In general, I will be running a 16 kHz sampling rate for a finite acquisition (20 seconds) for each voltage output channel (it will be in a loop). In the best case, the AO and AI should have minimal drift and skew (I understand that DSA has its unavoidable delay -- that delay is acceptable for our system, but it would be nice to find out how many samples it amounts to).

Message 11 of 33
Solution
Accepted by topic author cklai11

First off, what you're doing now should let you achieve "pretty good" sync, very possibly as much as your app demands.

 

With all inputs synced up auto-magically using channel expansion, you're left with only 2 factors to consider:

 

1. The inherent filter delays built into your DSA devices.  These tend to be dominated by the digital filter stage, whose delay is best expressed in #'s of samples.   The spec sheet expresses delay in time units and shows it as a function of sample rate, but when you work through the math you find that the big term in the equation reduces to simply a # of samples.

    Two bits of good news.  First, this is a fixed delay which can be calculated and will be the same for all inputs on your DSA devices.  Second, it appears that you're feeding a reference signal out of your output device and into one of your DSA channels.  Depending on what you define this reference signal to look like, you have an additional way to measure your timing offset.
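(As a hedged worked example: the delay-in-samples figure converts to time by simple division. The 64-sample number below is the one quoted later in this thread for 16 kS/s, not taken from the 4496 spec sheet directly.)

```python
fs = 16000.0          # sample rate, S/s
delay_samples = 64    # fixed digital filter delay, in samples
print(delay_samples / fs)   # 0.004 -> a constant 4 ms analog input delay
```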

 

2. Tolerances in the timebases driving your inputs and outputs respectively.  Without looking up your specific device specs, I know offhand that many common ones are rated to 50 ppm accuracy, which translates to 3 msec per minute.  A 20 second run with such devices opens you up to as much as 1 msec accuracy error on each.
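(The arithmetic behind those numbers, as a quick sketch:)

```python
ppm = 50e-6
print(ppm * 60.0)   # 0.003 s -> 3 msec of drift per minute
print(ppm * 20.0)   # 0.001 s -> up to 1 msec over a 20 second run, per device
# Two independent 50 ppm clocks can err in opposite directions, so worst-case
# relative skew between AI and AO over 20 seconds is about 2 msec.
```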

 

Further note: many PXI devices will automatically sync their internal timebases to the PXI chassis clock.  My PXI experience is limited, but over a decade ago when the 6733 was already considered "very old", I recall discovering that it could *not* be synced that way.  At least not on the chassis I was using at the time, though my memory tells me it was the fault of the 6733 more than the chassis.

    I'm guessing the 6723 is of similar old vintage as the 6733 such that it might similarly not support syncing its timebase to the chassis clock.

 

It might be worthwhile to try a little harder to find ways to export a useful clock from the channel-expansion-based input task to be used when configuring your output task.  It will likely be best to do it in a way that leaves DAQmx more in charge due to the cautions I made back in msg #5.  In LabVIEW, I'd be trying something like one of these:

Kevin_Price_0-1709140599841.png
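(For readers without the image: a rough text analogue of the two approaches, sketched with the nidaqmx Python API rather than LabVIEW. Device and terminal names such as "PXI1Slot2" are hypothetical, you would pick just one of the two options, and, as discussed later in this thread, explicit signal export may not behave as expected on every device.)

```python
import nidaqmx
from nidaqmx.constants import Signal

# Channel-expansion AI (DSA) task; device names are hypothetical.
ai_task = nidaqmx.Task()
ai_task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")
ai_task.timing.cfg_samp_clk_timing(16000.0)

ao_task = nidaqmx.Task()
ao_task.ao_channels.add_ao_voltage_chan("PXI1Slot6/ao0")

# Option A: ask the AI task which terminal its sample clock appears on and
# point the AO task straight at it; DAQmx picks the backplane route.
ao_task.timing.cfg_samp_clk_timing(16000.0, source=ai_task.timing.samp_clk_term)

# Option B: explicitly export the AI sample clock onto a PXI trigger line,
# then reference that line from the AO task.
ai_task.export_signals.export_signal(Signal.SAMPLE_CLOCK, "/PXI1Slot2/PXI_Trig0")
ao_task.timing.cfg_samp_clk_timing(16000.0, source="/PXI1Slot6/PXI_Trig0")
```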

 

 

- Kevin P

Message 12 of 33

Using channel expansion (as previously suggested) by grouping all DSA channels into one task will ensure synchronization of those devices. If you want to utilize different measurement types, say voltage and IEPE, then you have to incrementally build your task. What you can't do is combine different measurement classes into a single task (AI, AO, counters, and digital can't be mixed).
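(A hedged sketch of that incremental build using the nidaqmx Python API; device names and microphone settings are hypothetical example values, so substitute the names NI MAX shows you.)

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

ai = nidaqmx.Task()
for slot in ("PXI1Slot2", "PXI1Slot3", "PXI1Slot4", "PXI1Slot5"):
    # IEPE microphone channels on each DSA card...
    ai.ai_channels.add_ai_microphone_chan(f"{slot}/ai0:9",
                                          mic_sensitivity=50.0,       # mV/Pa (example)
                                          max_snd_press_level=120.0)  # dB (example)
# ...plus plain voltage channels, added incrementally to the same task.
# This is legal because everything here is still the AI class.
ai.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai10:11")
ai.timing.cfg_samp_clk_timing(16000.0, sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=320000)   # 20 s at 16 kS/s
```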

 

You need a separate task for the AO card, and you want it to utilize the start trigger and sample clock of the DSA card.

 

You should go into NI MAX, review the Device Routes for each of the cards, and get familiar with the backplane of the 1033 chassis. Depending on which terminals the DSA task routes its signals to, you may need to export your signals to other terminals to get them across the backplane of the 1033. I believe the 1033 only has one bus (you should double-check me on this), so you should not need to make any reservations in MAX.

 

Once you create your DSA task, configure its sample clock to use the onboard clock, then query it for its Sample Clock Terminal using the DAQmx Timing property node (SampClk.Term). Feed this terminal in as the source for the AO task's sample clock.

 

When you configure the triggering of the DSA task, set it to None. Query this task for its trigger terminal using the DAQmx Trigger property node (Start.DigEdge.Src). Then, when configuring the AO task, set it to use a Digital Edge start trigger with that terminal as the source.

 

This should configure everything for synchronization; just make sure you start the AO task before the DSA task (see the sketch below).
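(Continuing the Python sketch from above, here is what those steps could look like; device and terminal names remain hypothetical.)

```python
import numpy as np

ao = nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("PXI1Slot6/ao0")   # the PXI-6723
ao.timing.cfg_samp_clk_timing(16000.0,
                              source=ai.timing.samp_clk_term,  # SampClk.Term query
                              sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=320000)
# The DSA (master) task gets no trigger; the AO (slave) task triggers off the
# DSA master device's start trigger terminal.
ao.triggers.start_trigger.cfg_dig_edge_start_trig("/PXI1Slot2/ai/StartTrigger")

ao.write(np.sin(2 * np.pi * 1000 * np.arange(320000) / 16000.0), auto_start=False)
ao.start()   # AO arms first and waits on the trigger
ai.start()   # starting the DSA task releases both together
```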

 

Regarding the delay introduced by the DSA card, check out the Filter Delay section in the manual, and also check the ADC Filter Delay entry in the specification sheet. If you are sampling at 16 kS/s with Low-Frequency Alias Rejection disabled (its default setting), your delay will be 64 samples. Refer to the spec sheet if you enable Low-Frequency Alias Rejection or sample at less than 1 kS/s. This is good news for you: you can now account for the delay exactly, without interrogating data to try to determine it.
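(One hedged way to apply that correction in post-processing, continuing the sketch above: simply discard the first 64 input samples.)

```python
FILTER_DELAY = 64  # samples at 16 kS/s, Low-Frequency Alias Rejection disabled
data = ai.read(number_of_samples_per_channel=320000)
# Drop the first 64 samples of each input channel so the AI data lines up
# with what the AO was actually driving at the time.
aligned = [channel[FILTER_DELAY:] for channel in data]
```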

 

The procedure above has worked great for me for synchronizing mixed AI cards (DSA, S-Series, X-Series) all on one chassis. It has largely been derived from the examples and white papers on synchronization. If you need to break up your DSA task for some reason, things get much more complicated; the channel expansion is a lifesaver in terms of synchronizing these cards. Otherwise you need to investigate the Master Sample Clock Timebase synchronization method to synchronize the multiple tasks.

 

 

 

 

Message 13 of 33

Thanks Kavanakt7 for a great set of detailed instructions!  Just one brief follow-up question.  I've long been an advocate of syncing via a shared sample clock alone and not bothering with a start trigger when one isn't needed.  I was inclined to think it shouldn't be needed here provided your sample clock instructions are followed, namely:

- configure the AO task to use the DSA task's sample clock

- start the AO task first

 

Just curious.  Since you clearly know your way around this stuff, I'm just checking to see if there's something I've been missing all these years that I ought to learn.   (BTW, I *do* know that a shared trigger is needed across devices when a sample clock can't be shared.)

 

 

-Kevin P

Message 14 of 33

@Kevin_Price, it really just depends on the application. In this instance, he is spanning multiple devices, so to get the best synchronization out of that, I would configure triggers as described. Ultimately the end user needs to decide whether it's necessary or not.

 

In general, I learned DAQmx as Create Task, Configure Channels, Configure Timing, Configure Triggering, Configure Logging, and begin. So I never really think about skipping one of those steps. I approach it this way because being explicit makes me more aware of what is going to happen. If I don't configure timing, triggering, or logging, then default settings take over, and I need to be knowledgeable about what those settings are and how they impact my application.

 

An example of when not using triggers may be problematic: say I have two AI tasks, each with a finite acquisition of 10 kS (really just for ease of argument), where I have configured an external sample clock that I am providing at 10 kS/s. If I don't configure triggering, then no matter how I start the tasks, it is very possible that my samples don't align, in ways that can be unpredictable yet observable. This occurs because the sample clock is already running prior to anything else.

 

Expanding on that example, if I configure one of the tasks to use the other task's sample clock, then the samples should fall back in line with one another as long as I start the slave first, then the master. Hopefully the master is not somehow generating a clock when I start the slave device. Additionally, hopefully the route the sample clock takes across the backplane minimizes skew and uncertainty (newer DAQmx does a really good job of finding an optimal route; in the past you'd route to a DSTAR line, reserve across backplane buses, then point to the corresponding card's DSTAR line for optimal routing).

 

Lastly, if I need the external sample clock, I utilize the start triggers to hold back the acquisitions. Then, when I start the slave, its clock is pulsing but the task is still waiting on the master to run. So when the master runs, the slave runs as well and the samples fall in line (see the sketch below).
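(A hedged sketch of that hold-back pattern with the nidaqmx Python API; device and terminal names like "Dev1" and PFI0 are hypothetical stand-ins for wherever the external 10 kS/s clock actually arrives.)

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

master = nidaqmx.Task()
master.ai_channels.add_ai_voltage_chan("Dev1/ai0")
master.timing.cfg_samp_clk_timing(10000.0, source="/Dev1/PFI0",
                                  sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=10000)

slave = nidaqmx.Task()
slave.ai_channels.add_ai_voltage_chan("Dev2/ai0")
slave.timing.cfg_samp_clk_timing(10000.0, source="/Dev2/PFI0",
                                 sample_mode=AcquisitionType.FINITE,
                                 samps_per_chan=10000)
# Hold the slave back: its external clock is already pulsing, but no samples
# are latched until the master's start trigger fires.
slave.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/ai/StartTrigger")

slave.start()   # armed and waiting
master.start()  # both acquisitions begin on the same clock edge
```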

 

I really keep falling back to this article to digest synchronization: Synchronization Explained. I think there is nothing wrong with not bothering with the start trigger if you don't need it. I think it is highly probable the trigger would not be needed in this case, but the fact that I was spanning multiple tasks is why I recommended using it.

 

Message 15 of 33

Sorry, OP, for this side conversation; hopefully you can get a little something out of it too.   Meanwhile...

@Kavanakt7, gotcha, makes sense on the habit of pretty much always being explicit about trigger config.  I treat some other stuff that way -- just always take care of doing it and you won't guess wrong about whether you needed to or not.

    There are just a couple of little details I wanted to touch on a little more, highlighted in red.

 


@Kavanakt7 wrote:

Expanding on that example, if I configure one of the tasks to use the other task's sample clock, then the samples should fall back in line with one another as long as I start the slave first, then the master. Hopefully the master is not somehow generating a clock when I start the slave device. Additionally, hopefully the route the sample clock takes across the backplane minimizes skew and uncertainty (newer DAQmx does a really good job of finding an optimal route; in the past you'd route to a DSTAR line, reserve across backplane buses, then point to the corresponding card's DSTAR line for optimal routing).

Have you ever found this to be an actual problem?  Or are you approaching it with extra care "just in case"?   I haven't worked with the widest range of NI devices, but I never recall having an issue with a task outputting a sample clock signal before it's started.

 

As to DSTAR and backplane routing, that's a whole other level of sync that I simply haven't needed to delve deeply into.  I'm not sure I've ever worked an app that truly *needed* to establish sync below maybe 100 nanosec or so.   Good to hear that DAQmx does a pretty good job of routing smartly though.

 

I started responding to another little bit and then realized I misunderstood the scenario you were setting up.  So never mind.

 

That sync article you linked is a really good one; it's exactly the "in depth" one I had in mind back in my msg #10 response.  Thanks for joining in the conversation to help!

 

 

-Kevin P

Message 16 of 33

It has been close to 20 years since I first suspected I was seeing a sample clock being generated before it should have been.

 

I do not believe that sample clocks are necessarily free-running on devices. That being said, I have suspected that clocks can persist. I would expect clearing tasks and resetting devices to shut these down. So if your application stops a task and later restarts it, all it does is resync the clock, and then data flows.

 

I suspect the ADCs utilize an enable pin, or some other method of stopping them from clocking data out. IMO, a clock persisting even when not needed may be occurring.

 

I will follow back up tomorrow when I have hardware available to let you know if I am right or wrong about this. 

Message 17 of 33

Hi all,

 

Many thanks to both of you for your advice! I have managed to get everything running and synced together. It turns out that the "export signal" is the missing ingredient that allows synchronizing channel-expansion tasks between AO and AI!

 

The attachment below shows the configuration I used to synchronize the tasks (don't mind the unwired filter-delay-samples property; I was just using that for reference purposes). Please do comment if there is something missing in my VI.

cklai11_0-1709213403020.png

Looking at the raw form (red plot) of the ESS signal (at the very beginning) and the reference signal measured from the input (blue plot), both are properly synchronized!

cklai11_1-1709213568969.png

 

 

 

 

Message 18 of 33

Just one small comment:

 

You shouldn't need to export both the SampleClock *AND* the SampleClockTimebase.  Just one or the other.  I'd prefer the sample clock itself because that only starts up when the task is started (to the best of my knowledge, which is being helpfully fact-checked by Kavanakt7).  The timebase would probably be always present.

 

You've covered yourself by *also* configuring triggering as advised by Kavanakt7, so either one should be able to work for you.  Exporting the timebase probably *requires* sharing a trigger as well whereas exporting just the sample clock probably doesn't.  But the triggering also won't hurt you any, so you may as well keep it.

 

 

-Kevin P

Message 19 of 33

Just following up.

 

I agree with Kevin_Price; exporting the timebase would be unnecessary in this instance. The timebase is what is applicable to DSA devices, so, for instance, if you decided to break all of your DSA devices into individual tasks, it would be beneficial to share the timebase, since that is where the sample clock derives itself from. As stated previously by santo_13, you can't define an external sample clock for DSA devices (since it is actually the timebase driving the acquisition, the sample clock is always derived).

 

I do not believe the DAQmx Export Signal node is doing what you want. You should probe its outputs or attach an indicator; I believe you will be getting an empty string back, so you are effectively still just using onboard timing (passing an empty string is acceptable, it just defaults to the onboard clock). I would have expected your data to be misaligned, but I suppose the start trigger is fixing your alignment in this instance.

For your case, you want to use the DAQmx Timing property node and query SampClk.Term; this will output the appropriate clock terminal and synchronize the timing. With the configuration as you have it now, I am curious whether the data drifts if you run over a long period of time; that would confirm your timing sources are disconnected (if you can run for a minute, I would expect you to see something by then). You could also introduce a delay large enough to be out of phase with your sine wave, so that you can see the misalignment.

Lastly on this topic: I would only need to export a signal if I wanted to duplicate it somewhere, say on a PFI pin I could monitor with a scope, or to route it across the backplane. For what we are doing, just telling slave devices to go to the terminal of the master device is enough. I believe LabVIEW attempts to automatically route the signal across the backplane for you when you use the DAQmx Export Signal VI or Export Signal nodes.
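(A hedged Python analogue of that probe, using the nidaqmx API with a hypothetical device name: print what the task actually reports as its sample-clock terminal instead of assuming the export did the job.)

```python
import nidaqmx

ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")   # hypothetical DSA channel
ai.timing.cfg_samp_clk_timing(16000.0)
# See what the task actually reports for SampClk.Term:
print(repr(ai.timing.samp_clk_term))   # expect e.g. '/PXI1Slot2/ai/SampleClock'
# An empty string wired in as a clock source silently means "onboard clock",
# which is the misconfiguration described above.
```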

 

The last thing: I checked today to see if sample clocks can persist. For my experiment I am using a PXI-1078 chassis, PXI-4462 (AI DSA) cards, and PXI-6123 (AI S Series) cards. I used a function generator to produce a signal; this is not of any real relevance other than confirming that just sharing the sample clock from a master to a slave synchronizes the data. LabVIEW 2017 on Win7.

 

I made a quick VI and deployed it as an executable (this shouldn't be relevant, but just in case: I do not have LV development on our hardware machines). The VI configures a channel on a Master device for AI Voltage and a Slave device for AI Voltage. I query the Master for its Sample Clock terminal and feed it into the Configure Sample Clock.vi source terminal of the Slave. From there, when I hit Acquire, I start the Slave task, wait a full second, then start the Master task. NO TRIGGERING is set up, so if the reads complete at the same time, then trigger configuration is unnecessary; if the Slave completes the read before the Master, then triggering is necessary. I also have an option to do the timing with triggering.
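(For reference, a hedged Python analogue of that experiment with the nidaqmx API; device names are hypothetical stand-ins for the 4462 and a second card.)

```python
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType

master = nidaqmx.Task()
master.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")
master.timing.cfg_samp_clk_timing(10000.0, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=10000)

slave = nidaqmx.Task()
slave.ai_channels.add_ai_voltage_chan("PXI1Slot3/ai0")
slave.timing.cfg_samp_clk_timing(10000.0, source=master.timing.samp_clk_term,
                                 sample_mode=AcquisitionType.FINITE,
                                 samps_per_chan=10000)

slave.start()        # if the master's clock were already free-running, the
time.sleep(1.0)      # slave would be acquiring during this pause...
master.start()       # ...otherwise both 1 s acquisitions start here
t0 = time.monotonic()
slave.wait_until_done(timeout=10.0)
print("slave finished after", time.monotonic() - t0, "s")
master.wait_until_done(timeout=10.0)
print("master finished after", time.monotonic() - t0, "s")
# Slave finishing well before the master => its clock started early and a
# shared start trigger is needed; finishing together => the trigger isn't.
```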

 

This has been more difficult than you might anticipate: what works in some instances doesn't work in others. First of all, querying the Sample Clock terminal or the Start Trigger terminal of my DSA device is supported; it is not supported for the 6123. In short, I believe that sample clock synchronization by itself is enough. It is highly possible that in the past I thought I was setting the sample clock when really I wasn't. Since most of my development needs to be deployed, error handling is not always taken care of the way it needs to be, so errors could be suppressed. Maybe I was using a different set of cards? Either way, you should have the ability to notice when synchronization is compromised and when you need to use the start trigger.

 

I have attached the VI so you all can see what I did. I do not have LV access here on a networked machine, so sorry, I can't drop in any snippets or anything. The VI is written in LV 2017.

Message 20 of 33