
How do I determine the appropriate buffer size to use to generate a particular frequency waveform?

I am trying to generate a continuous waveform at frequencies ranging from 100 Hz to 20 kHz. The standard buffer size of 1000 works well for multiples of 100 between 100 Hz and 1000 Hz and for multiples of 1000 from 1 kHz to 20 kHz. However, when I try to generate waveforms at intermediate values such as 1800 Hz, the output viewed on an oscilloscope shows discontinuities. The errors disappear after some tinkering with the buffer size, but is there a way to calculate the best buffer size for a given frequency?
Message 1 of 10
If you are regenerating the same buffer, then you want to make sure the buffer contains a whole number of periods. If you have 2.5 periods in your buffer and are regenerating this same buffer, you would get the middle of the third period and then start over at the beginning of your first period, giving you a discontinuity. If you are not regenerating the same buffer, then as long as you are picking up where you left off as far as your data goes, you should not see the discontinuity. For our previous example, if your next buffer's first point starts at the 2nd half of the 3rd period, you shouldn't see a break in the waveform. Below is a link to a knowledge base article that discusses how to calculate the resulting output frequency. I hope this information helps. Respond if this did not answer your question or if you have additional questions.

Calculate Frequency
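
To put a number on that rule, here's a minimal sketch of the check (Python; the function and example values are illustrative, not taken from the linked article):

```python
# Minimal sketch of the whole-number-of-periods rule described above.
def periods_in_buffer(buffer_size, update_rate, wave_freq):
    """Number of waveform periods one regenerated buffer holds."""
    return buffer_size * wave_freq / update_rate

# With a 1000-sample buffer at 100 kS/s:
print(periods_in_buffer(1000, 100_000, 250))  # 2.5 -> glitch on regeneration
print(periods_in_buffer(1000, 100_000, 300))  # 3.0 -> seamless regeneration
```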
Message 2 of 10
Cool, I've been waiting for someone to ask this question because I put together a fairly nifty solution to it when I faced the same problem. Assuming, that is, that you're making an output buffer representing a waveform with one particular frequency and then doing a continuous output that cycles through that buffer multiple times. (You would be resetting and reprogramming your board to generate a waveform representing a different frequency.) Sound right so far?

The first key concept is to be prepared to store multiple cycles of your waveform in the output buffer. The best # will depend on your waveform frequency and your output update rate.

For example, suppose you have an update rate of 100 kHz and you want to produce a 6 kHz output waveform. Theoretically, one cycle of your output waveform lasts for 100/6 = 16.6666... update periods. Since you can only generate integer #'s of updates, the solution in this case is to define an output buffer containing 3 cycles of your waveform. This will last for 3*100/6 = 50.0000 update periods. Bingo!
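
For anyone who wants to sanity-check that arithmetic, here's a tiny sketch (Python; the variable names are just illustrative):

```python
f_o = 100_000          # output update rate, Hz
f_w = 6_000            # desired waveform frequency, Hz
print(f_o / f_w)       # 16.666... update periods per waveform cycle
print(3 * f_o / f_w)   # 50.0 -> three cycles span exactly 50 updates
```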

Now let's consider the general real world case instead of a convenient example that works out perfectly. You need to determine how to fit D full cycles of your frequency f_w waveform optimally into N of your output update periods, which occur at frequency f_o. In other words, you're trying to satisfy:

D * (1/f_w) ~= N * (1/f_o)

This is mathematically equivalent to a problem in rational approximation. Specifically, you will be trying to determine the most appropriate choice of N and D to most closely approximate the fractional ratio between f_o and f_w. In other words,

(N / D) ~= (f_o / f_w)

The prior link outlines an algorithm that gives a successive approximation, i.e., each iteration provides an N and D that get progressively closer to the true fraction.

In the real world, there's a practical buffer size limit on how large you want to allow N to become. That could be one way to decide when to terminate the approximation. Another way could be based on the percent error between your approximation and the true fraction.

In my past job, I had to perform this approximation in real-time and wanted very consistent execution time. So my solution is based on having solved the problem for 1, 2, and 3 terms in closed form. Even for the very worst case of approximating sqrt(2) (as outlined in the link), the approximation is good to 1% using only 3 terms. In other words, the theoretical time required to define D cycles of the waveform would be N update periods, with an error no greater than 1% of an update period.

I'll post my subvi if I'm able, but I may need to password-protect the diagram. If I recall correctly, the inputs provide user options for minimum requested accuracy and maximum allowed N, and there are outputs for N, D, and a boolean to tell whether the accuracy request was satisfied.

-Kevin P.
CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 3 of 10
Kevin,

That sounds like a neat example. If you are able to post it, you can upload it to the Example Code Library:

http://www.ni.com/devzone/dev_exchange/ex_search.htm

Matt P.
Applications Engineer
National Instruments
Message 4 of 10
Ok, but first I'd better, uh, fix it.

I tried it out last night for a wider range of inputs than I originally needed to accommodate, and found a few that didn't work out correctly. I had made the subvi more universal than it strictly needed to be, but had primarily tested it for a fairly restricted subset of inputs.

I'll try to rewrite it as an iterative algorithm, as mentioned in my first posting. That way I could post it for sure.

-Kevin P.
Message 5 of 10

I did a little more Google searching and found a couple of newsgroup postings that were even more useful than the original link. The articles are embedded in the diagram of the two implementations given below.

User-controlled inputs are:
1) original floating point value
2) requested max error
3) max allowed value for numerator
4) max allowed value for denominator

Outputs are:
1) floating point value of rational approximation
2) numerator of rational approximation
3) denominator of rational approximation
4) actual error
5) boolean flag telling whether error spec was met
6) # of terms / iterations to achieve approximation

The first (and recommended) implementation is based on a continued fraction representation of the original floating point value. In my testing, the slowest execution time was about 5x the fastest.
I left several array indicators on the front panel that help show the progression of the approximation through all its iterations. So you can check out successive approximations for pi or something.

The second implementation is based on iterating through a sequence of "Farey" fractions. Its fastest case can be several times faster than the continued fraction method, but there are other cases that slow it down by a factor of 1000+. (It seems to be worst when the floating point value is very close to an integer, a situation that may arise fairly often.) Because of its virtually unbounded execution time, I advise against using it. Consider its inclusion here to be solely for educational purposes.
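
The attachments themselves are LabVIEW VIs, so as a rough text-language companion, here's a minimal Python sketch of the recommended continued-fraction method with the same inputs and outputs listed above. All names and default limits are illustrative, not taken from the actual subvi.

```python
def rational_approx(x, max_err=1e-9, max_num=10**6, max_den=10**6):
    """Continued-fraction rational approximation of x > 0.

    Returns (value, numerator, denominator, actual error,
    error-spec-met flag, number of terms), mirroring the
    inputs/outputs described in the post above.
    """
    if int(x) > max_num:
        raise ValueError("x exceeds max_num; no convergent fits")
    # Convergent recurrences: p_k = a_k*p_{k-1} + p_{k-2} (same for q)
    p0, q0 = 0, 1          # p_{k-2}, q_{k-2}
    p1, q1 = 1, 0          # p_{k-1}, q_{k-1}
    frac, terms = x, 0
    while True:
        a = int(frac)                      # next continued-fraction term
        p2, q2 = a * p1 + p0, a * q1 + q0
        if p2 > max_num or q2 > max_den:
            break                          # size limit: keep prior convergent
        p0, q0, p1, q1 = p1, q1, p2, q2
        terms += 1
        if abs(x - p1 / q1) <= max_err:
            break                          # accuracy request satisfied
        rem = frac - a
        if rem == 0:
            break                          # x is represented exactly
        frac = 1.0 / rem
    err = abs(x - p1 / q1)
    return p1 / q1, p1, q1, err, err <= max_err, terms

# 100 kHz update rate, 6 kHz waveform -> 50 updates per 3 cycles:
print(rational_approx(100_000 / 6_000))   # (16.666..., 50, 3, 0.0, True, 3)
```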

As suggested earlier in the thread, I'll post the continued fraction version as a piece of example code.

-Kevin P.

Message 6 of 10

After recently linking to this thread, I noticed that I could no longer download the example vi using Chrome. I decided to repost, and in the process I did a little minor tweaking as well as making a wrapper that helps translate from the terminology of rational approximation, with its talk of numerators and denominators, over to the terminology of DAQmx, with its talk of buffer sizes and # of waveform cycles.

 

In the process, I also wound up upcompiling the code to LV 2016.   FYI.
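
For readers without LabVIEW, the wrapper's job can be sketched in a few lines of Python: the standard library's Fraction.limit_denominator performs the same continued-fraction style approximation. The function name and buffer limit here are illustrative, not from the posted VI.

```python
from fractions import Fraction

def buffer_for_waveform(update_rate, wave_freq, max_buffer=8192):
    """Translate rational-approximation terms into DAQmx terms:
    returns (buffer size in samples, # of waveform cycles)."""
    # We want cycles/buffer ~= f_w/f_o with the buffer size as the
    # denominator, so limit_denominator bounds the buffer size directly.
    frac = Fraction(wave_freq) / Fraction(update_rate)
    approx = frac.limit_denominator(max_buffer)
    return approx.denominator, approx.numerator

print(buffer_for_waveform(100_000, 6_000))   # -> (50, 3)
```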

 

 

-Kevin P

Message 7 of 10

More than 30 years ago, before the era of PCs, when 65 kB was as much memory as you could address directly, I programmed a "sum of sines" algorithm for a behavioral stimulus that covered a frequency range of 0.02 - 1 Hz.  The technique we used was to generate a waveform of 16,384 points that represented the 7th, 13th, 23rd, ... harmonics (all relatively prime, either 8 or 10 of them) played back so that the fundamental ranged from 0.001 to 0.01 Hz (some of these numbers might not be quite right -- it's been a long time).  The point is you need to have a buffer large enough to hold an integer number of periods inside it.  If it is a single frequency, you need one period's worth of points.
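
Bob allows that the exact numbers are fuzzy after 30 years, so the sketch below (Python/NumPy, with an illustrative harmonic list) just shows the construction: every component is an integer harmonic of the buffer's fundamental, so the buffer as a whole is exactly periodic and can be regenerated without glitches.

```python
import numpy as np

N = 16_384                            # buffer length in points
harmonics = [7, 13, 23, 29, 41]       # illustrative relatively prime harmonics
t = np.arange(N) / N                  # one fundamental period of the buffer
buf = sum(np.sin(2 * np.pi * h * t) for h in harmonics)
buf /= np.abs(buf).max()              # normalize to full scale
# Regenerated at update rate f_s, harmonic h lands at h * f_s / N Hz,
# and every component completes a whole number of cycles per buffer.
```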

 

Bob Schor

Message 8 of 10

And yet another update.  The topic came up again, I pointed here, and in the process realized that the wrapper utility I posted recently didn't have any terminal connections.  I added those and then added yet another layer to demo how to connect it to an actual AO sine wave generation task.  The zip below contains both the vi's posted above (after I added wiring terminal connections to one) plus a vi that integrates them with an AO task.

 

 

-Kevin P

Message 9 of 10

A few years later and here's another small update.  

 

- Now attempts to auto-discover the board's master timebase.  Uses front panel control value only if auto-discovery fails.

- Changed from Continuous Sampling to 3 buffers' worth of Finite Sampling so the demo can stop on its own

- Added a few front panel indicators and clarified some front panel labels

 

It's also worth noting that this whole exercise filled more of a void back in the "old days."  In the years since I originally posted this method, DAQmx has made non-regenerating AO tasks *much* more robust and reliable.  So today if you need a glitch-free solution, you could probably go with a non-regenerating Continuous Sampling task where you compute and write blocks of data on the fly for as long as the AO needs to last.

Of course if that's a very long time, the attached utility does the dirty work to fit the right # of waveform cycles into a buffer and lets you use regeneration, which is even easier.
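
As a concrete endpoint for the thread, here's a hedged sketch of that regenerating approach using the nidaqmx Python package rather than LabVIEW. The device/channel name "Dev1/ao0" and all numbers are placeholders; the buffer holds a whole number of waveform cycles (the N = 50, D = 3 result from earlier), so DAQmx's default regeneration replays it seamlessly.

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

f_o = 100_000                    # AO update rate (S/s)
n_samples, n_cycles = 50, 3      # N and D from the rational approximation
t = np.arange(n_samples) / f_o
data = np.sin(2 * np.pi * (n_cycles * f_o / n_samples) * t)  # exactly 3 cycles

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.timing.cfg_samp_clk_timing(
        rate=f_o,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=n_samples,
    )
    task.write(data, auto_start=False)  # DAQmx regenerates this one buffer
    task.start()
    input("Generating... press Enter to stop.")
    task.stop()
```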

 

 

-Kevin P

Message 10 of 10