Digital I/O


TotalSampPerChannelGenerated

I'm using an M-series 6259 under LV 7.1.1
 
I've set up a hw-timed DO task to continuously generate a short digital pattern (less than 100 states).  After starting the DO task but before starting the sampling clock, the DAQmx property "TotalSampPerChannelGenerated" returns the value 2047.  After I start the sampling clock, it seems to increment properly, even if the sampling clock task is re-started multiple times to generate multiple sequences of hw-timed DO. 
 
I've tried generating the sample clock both as a Counter / Timer finite pulse train and as a finite-sampling AO task and observed the same result.  I also upgraded from DAQmx 7.4 to 8.0 but again, the behavior didn't change.  I tried making the DO task a finite-sampling task, but the behavior was worse -- the initial offset was a different non-zero number which did not increment as the sample clock ran.
 
Here's the app: I have a master list of all the DO patterns to be generated.  For any given run of the overall app, I may start from any index in this master list, and then traverse them incrementally either forward or backward.  Meanwhile, I'm also capturing DI bits off the trailing edge of the same sampling clock.  The app needs to verify that the DI pattern is always "correct" for the given DO pattern.  To do this, I planned to keep track of the starting index and direction and then use the "TotalSampPerChannelGenerated" to let me look up the corresponding DO pattern in my master list.
 
Trouble is, can I trust the "TotalSampPerChannelGenerated" property?  If I could know for sure that it will ALWAYS have an initial offset (such as 2047), fine, I can establish that value at the beginning of the program and subtract it off every time I query.  But the fact that it gives me a "goofy" result makes me trust it less.  Soooo......
 
1. Can anyone else confirm this behavior?  Does everyone get the same value -- 2047?
2. Can anyone from NI explain the behavior?  Bug?  Intentional -- if so, why?  Can I count on that offset?  Even after a 32-bit count rollover?  (Some tests may run long enough for this to occur).
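For concreteness, the subtract-the-initial-offset idea from the last paragraph (including question 2's rollover concern) can be sketched in Python. This is a hypothetical helper, not DAQmx code; it assumes the property behaves as an unsigned 32-bit counter that wraps:

```python
OFFSET = 2047    # initial value observed before the sample clock starts
WRAP = 2 ** 32   # assumed: the count wraps as an unsigned 32-bit value

def samples_generated(raw_tsg, offset=OFFSET):
    """Subtract the startup offset from a raw TotalSampPerChannelGenerated
    reading, tolerating a 32-bit rollover of the underlying counter."""
    return (raw_tsg - offset) % WRAP

print(samples_generated(2047))   # 0 right after the task starts
print(samples_generated(2147))   # 100 samples later
```

The modulo makes the subtraction come out right even after the raw count wraps past 2^32, as long as the query interval is shorter than one full wrap.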
 
-Kevin P.
 
ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 1 of 10
Forgot to include example as attachment in previous post...
 
(Set to run on M-series device with Analog Output, configured as Dev1.)
 
-Kevin P.
Message 2 of 10
Kevin,

I was able to verify on several computers that, in your example, TotalSampPerChannelGenerated did initialize to the value 2047 and counted properly from there once the task began. Why this is, I do not know, but it does look consistent. I will dig into this issue for you and let you know what I can find.

-GDE
Message 3 of 10
Normally, the TotalSampPerChannelGenerated attribute tells you how many samples have been generated by the device.  However, for subsystems that don't have their own timing engine (such as correlated digital output), there is no way for the device to know how many samples have been generated.  In this case, TotalSampPerChannelGenerated represents the total number of samples that have been written to the device.  Note that "written to" does not mean the sample has been generated: it may have been generated, or it may still be sitting in the device's onboard FIFO.  If you use the DAQmx Buffer property node and query the Output.OnbrdBufSize property, you'll see that the digital output FIFO size for the 6259 is indeed 2047 samples deep.
 
This obviously makes writing your application a little trickier, but here are a few options that come to mind.  First, it's not clear whether you're snapshotting the progress while the generation is running, or whether you generate a pattern for a while, stop the external clock, read TotalSampPerChannelGenerated, write a new pattern, and then start the clock again.  If it's the latter, perhaps the easiest thing to do is read the onboard buffer size up front and, when you're done generating a pattern, wait a sufficient amount of time for the PCI bus to fill the FIFO and then subtract 2047 from TotalSampPerChannelGenerated.  Since the PCI bus doesn't provide dedicated bandwidth, there's no guarantee on how long it will take the FIFO to fill, but in practice I would expect no more than a few milliseconds.  If TotalSampPerChannelGenerated hasn't moved within a few milliseconds of removing the external clock, it's pretty safe to assume the FIFO is full.
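The wait-until-it-stops-moving idea above can be sketched in Python. This is a hypothetical polling helper, not DAQmx code: `read_tsg` stands in for whatever call returns the current TotalSampPerChannelGenerated value:

```python
import time

def settled_tsg(read_tsg, poll_interval_s=0.001, stable_polls=3):
    """Poll a TSG reader until the value stops changing (i.e. the FIFO
    has presumably filled after the external clock stopped), then return
    it.  `read_tsg` is any zero-argument callable returning the count."""
    last = read_tsg()
    unchanged = 0
    while unchanged < stable_polls:
        time.sleep(poll_interval_s)
        current = read_tsg()
        if current == last:
            unchanged += 1
        else:
            last, unchanged = current, 0
    return last

# simulated reader: the count ramps up, then holds once the FIFO is full
values = iter([2040, 2045, 2047, 2047, 2047, 2047, 2047])
print(settled_tsg(lambda: next(values)))   # 2047
```

Once the value has settled, subtracting the FIFO depth (2047) gives the number of samples actually clocked out, per the paragraph above.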
 
You could also try setting the data transfer request condition to onboard memory empty.  This essentially eliminates the digital output FIFO and makes the number of samples written equivalent to the number of samples generated.  The downside is that since you no longer have a FIFO, you can't go very fast (probably 10 kS/s max).  Can you use the Current Read Position from the DI task as your index into the master list instead of the TotalSampPerChannelGenerated property?  Instead of regenerating data, can you disallow regeneration of data and manually track how many points have been written to calculate your index?  Hopefully one of these suggestions will work.  If not, provide some more details on how you would like to perform the reading, writing, and clocking of data, and I'll see if anything else comes to mind.  For example, is your master list really just a giant buffer that you begin generating data from within at a random position and then automatically increment from there, or is it a collection of repeating buffers that you sequence between based on some user command or external signal?
Message 4 of 10
Part 1 of 2 due to 5000 char limit

Have wanted to explore M-series DO more before posting back but have been tied up in non-hw parts of the app.  I'll intersperse my comments in context:

reddog: for subsystems that don't have their own timing engine (such as correlated digital output), there is no way for the device to know how many samples have been generated.  In this case, TotalSampPerChannelGenerated represents the total number of samples that have been written to the device...  query the Output.OnbrdBufSize property, you'll see that the digital output FIFO size for the 6259 is indeed 2047 samples deep.
OK, I can understand this.  I think I remember that selecting Finite Sampling mode in the example caused the value to be 1000 instead of 2047.  Maybe this is just based on the default buffer size for Finite Sampling?  (I don't remember if I wired the buffer size explicitly in the example, and don't have LV on this network PC.)  If so, it makes similar sense. 
reddog: ...it's not clear if you're snapshotting the progress while the generation is running...
Yes.  The app is basically a motor driver with verification.  I need to generate a timed pattern of 6 bits to control transistor switching, and am reading back 24 bits for verification.  The expected input 24-bit pattern sequence depends on the output 6-bit pattern sequence and the logical combination of some other static DIO bits which do not change within any given run.  I'll describe a few gory details just because it may help in finding the best solution -- my particular app is not nearly as demanding as others I can easily imagine.
 
The timing needs are fundamentally driven by a requirement that there needs to be a certain minimum delay time between turning any one transistor off and another one on (magnetic field collapse, inductive kick, etc.).  Let's call it 100 usec.  The other timing requirement is that the motor speed depends on how rapidly the various transistor switchings are sequenced.  For now, let's assume a constant speed with 5000 usec between states.  (This value remains constant throughout any single run, but can vary from one run to the next).
 
There are a total of only 12 unique 6-bit patterns to produce, which keep repeating as needed.  6 of them represent the pattern during one of the 100 usec intermediate states.  The other 6 represent the "stable states" that each remain constant for 4900 usec. 
 
My implementation plan was to tell the DO task to do Continuous Generation and then control the actual # of samples generated with the sampling clock I specify.  That'll probably come from the AO subsystem set for a Finite Generation.  I would plan to use the 100 usec delay time to set the update rate at 10 kHz, thus I would only support overall switching times that are an integer multiple of 100 usec (like the 5000 usec mentioned previously).  This limitation is acceptable to others on the project.
 
So altogether I would need a 300 byte circular buffer that I would write once and then let it be regenerated repeatedly for the correct total # of samples.  The total # of samples will always represent an integer # of switching intervals, which in this case means an integer multiple of 50.  Typical values may range anywhere from 50 - 15000 total samples, using only multiples of 50.
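The 300-sample circular buffer described above can be sketched in Python. The pattern values here are hypothetical integer stand-ins for the real 6-bit transistor-drive states:

```python
# stand-ins for the 12 unique 6-bit patterns: even indices are the six
# "stable" states (held 4900 usec), odd indices the six 100 usec transitions
stable       = [0, 2, 4, 6, 8, 10]
intermediate = [1, 3, 5, 7, 9, 11]

buffer = []
for s, i in zip(stable, intermediate):
    buffer += [s] * 49 + [i]   # 49 samples of the stable state, 1 transition

print(len(buffer))   # 300 -- one full switching cycle at 10 kS/s
```

Written once and regenerated in hardware, this gives the 50-sample switching interval (49 + 1) at the 10 kHz update rate, with the total sample count controlled by the finite sample clock.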
 
-- to be continued --
 
-Kevin P.
Message 5 of 10
Part 2 of 2 due to 5000 char limit
 
Rewinding a bit, remember that there are only 12 unique 6-bit patterns to be generated.  I hoped to use change detection for my 24-bit pattern input to reduce the data processing load.  Rather than acquire off the trailing edge of the same clock at 10 kHz, I would instead capture patterns at the bit change rate which averages 400 Hz.  (2 transitions per 5000 usec).  My hope was (is) to query the DO task for TotalSamplesGenerated (TSG) "simultaneously" with querying the DI task for TotalSamplesAcquired (TSA, or whatever the actual name of the property is).  From TSG, I could determine what my output pattern must presently be.  That in turn would tell me what I should expect TSA and my input pattern to be.  I would continuously monitor that TSA and the actual input pattern do indeed match expectation.  I will probably need a small fudge factor to allow for software latency, but that's for a little further down the road...
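The TSG-to-expected-pattern lookup described above can be sketched in Python. This is a hypothetical helper: it assumes the 300-sample buffer layout (6 intervals of 49 stable samples plus 1 transition sample) and assumes any startup offset has already been removed from the TSG value:

```python
BUFFER_LEN = 300       # one full cycle: 6 intervals of 50 samples
INTERVAL   = 50        # 49 stable samples + 1 transition sample
STABLE     = 49

def expected_pattern_index(tsg):
    """Map a total-samples-generated count to the 0-based index (0..11)
    of the 6-bit pattern that should currently be on the DO lines."""
    pos = tsg % BUFFER_LEN
    interval, within = divmod(pos, INTERVAL)
    # even indices are stable states, odd indices are transitions
    return 2 * interval + (0 if within < STABLE else 1)

print(expected_pattern_index(0))     # 0 (first stable state)
print(expected_pattern_index(49))    # 1 (first transition)
print(expected_pattern_index(349))   # 1 (wrapped around the buffer)
```

The same index can then predict the expected 24-bit DI pattern for the verification step.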
 
Maybe, in retrospect, I should consider simply sampling the DI at 10 kHz.  Then in principle I could verify that it matches expectation without any reference to the TSG property of the DO task.  The entire expected sequence of 24 DI bits can be known at the beginning of each run.   Hmmm....
 
reddog: ...can you use the Current Read Position from the DI task as your index into the master list instead of the TotalSampPerChannelGenerated property?
If I go ahead and clock in the DI instead of using Change Detection, then this suggestion should work well.  Also, it wouldn't be a very big hardship to simply store the initial value of TSG (== 2047) and then subtract it off from all subsequent queries.  I generally make self-contained "Action Engine" modules for all the hw tasks, and could easily put it in there.  I just wasn't comfortable trusting this approach before understanding where the 2047 comes from.
 
reddog: Instead of regenerating data, can you disallow regeneration of data and manually track how many points have been written to calculate your index?
I don't think this will be my preferred approach due to all the DI processing I'm already committing the CPU to.  Still, for curiosity, can you describe this a bit more?  Do you mean keep track of # points written via DAQmx Write? Or do you mean that the TSG property would start from 0 (or perhaps some very small number) instead of 2047?  Just looking to learn something...
 
reddog: ...is your master list really just a giant buffer that you begin generating data from within at a random position and then automatically increment from there...
As described earlier (though not briefly and perhaps not clearly), yes.  Except the buffer will actually be pretty small and I'll allow regeneration.  So my problem could certainly be a lot worse!
 
Thanks again for the help!
 
-Kevin P.
Message 6 of 10

Thanks for the detailed description of your application.  There is still one thing I would like to clarify though.  If I understand correctly, your buffer will hold one sample of the 100 usec pattern and 49 samples of the 4900 usec pattern.  What's still unclear is when and how you plan to change the buffer contents from one set of patterns to the next set of patterns in the sequence of 12.  Do you plan to generate the 50 - 15000 samples with the same DO buffer while using the finite AO task as the clock source, rewrite the DO buffer to the next set of data after the AO task completes, restart the AO task, and continue to repeat this process, or do you plan to update the DO buffer throughout the 50 - 15000 sample generation? 

If it's the former, I would probably recommend not using change detection and instead using a correlated DI task to track progress.  This eliminates the complexity of reading the TSG property on the DO task (which is always a moving target) and then trying to correlate that back to the data acquired through the DI task (which is also a moving target).  It is more processing, but your throughput requirements are fairly small, so I don't think it should be that taxing on the system.  If you still want to try change detection and are using ao/SampleClock as the clock source of the DO, you can use the TSG property on the AO task instead.  Since the AO task has dedicated counters for its timing engine, the TSG property for AO will reflect the number of points clocked out and not the number of samples written to the device.  You'll still have to deal with the "snapshotting" problem, but at least you'll have eliminated the uncertainty of the device FIFO.  I would caution you against trying to use the DO TSG property and simply subtracting 2047 to compute the actual number of samples generated.  The 2047-sample FIFO depth defines a window of uncertainty, not a fixed offset.  While the driver will try to write data to the device any time the device FIFO isn't full, whether it is able to do so is determined by the amount of traffic across the PCI bus.  This means there can be anywhere from 0 to 2047 samples in the device FIFO at the time you read the property.  One last thing to consider when using a continuous DO task with regeneration: if you plan to update the DO buffer while the clock isn't running and then restart the clock, you'll still have old data sitting in the device FIFO unless you first stop the task.

If your plan is the latter, where you're updating the buffer while actively clocking out data, you'll need to consider what latency you're willing to live with when updating data (e.g., after writing data to the buffer, how long it takes the new data to show up on the output).  With your current configuration of buffer regeneration allowed, a 50-sample buffer, and a 2047-sample FIFO, you will need to wait up to 2096 samples (2047 + 49) worst case -- about 210 ms at the 10 kS/s rate -- before the new data you wrote is output.  If this isn't acceptable, you'll have to make some trade-offs.  You can either:

1.) Generate a larger buffer by duplicating data and generating at a higher sample rate.  The downside here is that it requires additional bus bandwidth and processing power if you use correlated DI instead of change detection.

2.) Change the default data transfer request condition.  The default is onboard memory not full.  You could change it to Onboard Memory Half Full or Onboard Memory Empty.  Changing to empty provides the least latency, but you also get poor overall throughput, since you have a jitter tolerance of only one sample before an underflow condition.  When I've tried transfer conditions of empty in the past, I usually only see throughput rates around 10 kS/s, so you may be pushing it with this configuration.

3.) Disallow regeneration of data.  The downside here is that you have to constantly write new data at a rate that keeps up with the sample clock rate.  However, since regeneration is disallowed, you directly control the latency by choosing how much data you write to the buffer at a time.  Another advantage of this method is that if you're directly controlling the clock, you can precisely write N samples and clock out N samples without generating underflow conditions or repeated data.
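The worst-case latency arithmetic above (2047 + 49 = 2096 samples for a 50-sample regenerating buffer) generalizes as a small sketch; the function name is hypothetical:

```python
def worst_case_latency_samples(fifo_depth, buffer_len):
    """With regeneration allowed, newly written data appears only after
    the device FIFO drains and the in-progress pass of the circular
    buffer completes: worst case = fifo_depth + (buffer_len - 1)."""
    return fifo_depth + (buffer_len - 1)

# 6259 DO FIFO (2047 samples) with a 50-sample buffer
print(worst_case_latency_samples(2047, 50))   # 2096 samples
```

Dividing by the sample clock rate converts this to time; a larger buffer or deeper FIFO lengthens the latency proportionally, which is why options 1-3 above trade latency against bandwidth.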

I hope some of this information will help with your application.

Message 7 of 10
reddog,
 
Thanks again for the detailed help & descriptions.  I think I know exactly which way to go now.
 
reddog: If I understand correctly, your buffer will hold one sample of the 100 usec pattern and 49 samples of the 4900 usec pattern.  What's still unclear is when and how you plan to change the buffer contents from one set of patterns to the next set of patterns in the sequence of 12.  Do you plan to generate the 50 - 15000 samples with the same DO buffer while using the finite AO task as the clock source, rewrite the DO buffer to the next set of data after the AO task completes, restart the AO task, and continue to repeat this process, or do you plan to update the DO buffer throughout the 50 - 15000 sample generation?
The app will not need to update the DO buffer throughout the 50-15000 sample generation.  The sequence of 12 patterns is known ahead of time and it would go: 49 samples of pattern 1, 1 of pattern 2, 49 of pattern 3, 1 of pattern 4, 49,1, 49,1, 49,1, 49,1.  The entire sequence can be represented as a total of 300 samples which will keep regenerating in hw as needed.  There'll be special handling for cases with fewer than 300 total samples.
 
See?  It's not nearly as tough as it might have been.  I don't need to make decisions on the fly to choose a sequence of patterns to generate, nor do I have a large set of unique patterns to contend with.  Probably most importantly, by generating a pre-known sequence I don't have to concern myself with latency.
 
Thanks for the suggestion of using the AO task's TSG property -- I'd like to think I'd have thought of that myself sooner or later, but who can say?  It's especially important to be aware that TSG *might* give me any value from 0-2047 if I query it before the board has finished copying data from system RAM to its on-board FIFO.
 
-Kevin P.
Message 8 of 10
Have you thought about counting your sample clocks? If your DO clock is coming from the ao/SampleClock, you could use that as the source of an edge counting task. If one DO sample is produced for every ao/SampleClock, then the edge count read, mod buffer size, should be a valid index into your DO buffer for which DO sample was currently being generated. But, as reddog says, it's a moving target.
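The edge-count-mod-buffer-size lookup suggested above is simple arithmetic; a hypothetical sketch, assuming one DO sample per counted ao/SampleClock edge:

```python
def current_do_index(edge_count, buffer_len):
    """Index into the regenerating DO buffer of the sample currently
    being generated, given an edge count of the shared sample clock.
    As noted above, this is a moving target by the time it's read."""
    return edge_count % buffer_len

print(current_do_index(2347, 300))   # 247
```

Unlike the DO task's TSG property, a counter task counts actual clock edges, so there is no FIFO-depth offset to subtract.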
Message 9 of 10

Once again, thanks for all the previous detailed help.  I've got everything up & working properly so I figured I'd update & close the thread out.

I had to abandon the "change detection" approach for a reason I failed to anticipate: signal propagation time differences.  The 24 input bits I wanted to read were NOT synchronized by our external hardware, and the propagation times were just different enough that a single output pattern change would sometimes produce 2 or even 3 separate change-detection events.

So I decided to simply use fixed-rate hw clocking instead.  I set up my correlated DIO clock at a 90% duty cycle, generating DO patterns on the leading edge and measuring DI patterns on the trailing edge.  I found that I was able to perform the necessary pattern matching at 10 kHz without drastically bogging the system down.  I've stuck with constant-rate sampling ever since, with the added benefit of making it much simpler to correlate the output and input patterns to one another.

-Kevin P.

Message 10 of 10