Currently the AI.PhysicalChans property returns the number of physical measurement channels (e.g. 16 on a 9213). However, to calculate the device's maximum sampling rate programmatically, we need the total number of channels including the internal ones such as cold-junction and autozero (for the 9213 that's 18).
Therefore I would like to suggest adding a property node with the number of internal channels, or the total number of physical channels, or something similar.
Use case: programmatically calculate the maximum sampling rate in a program that should work for multiple types of devices without being aware of their type.
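For illustration, here is a minimal sketch (in Python with the nidaqmx package) of the calculation this idea would enable. Today the internal-channel count has to be hard-coded per product type, which is exactly the per-device knowledge we want to avoid; the device name and the dictionary entry are assumptions based on the numbers above.

```python
import nidaqmx

# Hypothetical workaround: until a "total/internal channel count" property
# exists, the internal-channel count must be hard-coded per product type.
INTERNAL_AI_CHANNELS = {"NI 9213": 2}  # cold-junction + autozero, per the numbers above

dev = nidaqmx.system.Device("cDAQ1Mod1")  # device name is an assumption
n_total = len(dev.ai_physical_chans) + INTERNAL_AI_CHANNELS.get(dev.product_type, 0)

# Assuming ai_max_multi_chan_rate reports the aggregate conversion rate that
# is shared among all scanned channels, internal ones included:
max_rate_per_channel = dev.ai_max_multi_chan_rate / n_total
print(f"{dev.product_type}: {max_rate_per_channel:.1f} S/s per channel")
```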
Thanks for consideration and have a great day!
[actually a customer's idea]
NI should make sure that the measurement uncertainty specifications for its DAQ hardware are aligned with uncertainty analyses performed according to the ISO "Guide to the Expression of Uncertainty in Measurement" (GUM). See http://www.bipm.org/en/publications/guides/gum.htm
How feasible is the idea of setting up an open source driver project? This would be something that NI could host but anyone could participate in, to build a driver that fits different platforms. It could build on the DDK driver, but be centralized for collaborative effort. I like the name Open-DAQ driver. This would be a good way to address Linux users, who are accustomed to source code. I know we have DAQmx Base as a separate driver for different platforms, but an open source project would allow the Linux community to build a solution for novel or unusual Linux versions.
A DAQmx channel control allows the available channels to be filtered by I/O type (Analogue Input, Analogue Output, etc.) using the "I/O Name Filtering..." option.
A DAQmx task control, on the other hand, doesn't.
I'd find the option to filter listed tasks by I/O type pretty useful.
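As a stopgap, one can approximate this filtering in code by loading each task saved in MAX and inspecting its channels. A rough sketch in Python with the nidaqmx package, assuming the saved tasks can all be loaded on this machine:

```python
import nidaqmx
from nidaqmx.system import System

# Load each task saved in MAX and keep only the analog input ones.
ai_task_names = []
for persisted in System.local().tasks:
    task = persisted.load()  # may fail if the task's hardware is absent
    try:
        if len(task.ai_channels) > 0:
            ai_task_names.append(persisted.name)
    finally:
        task.close()
print(ai_task_names)
```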
In dealing with multiple projects and systems that each have different sets of tasks in MAX, I think it would be very handy if you could make virtual folders in the directory style listing under "NI-DAQmx Tasks" - that way you could folder up tasks by project or section of a project instead of having a long list of task names.
Anyone else think this would be helpful? Or might it cause an issue in some way?
We need a way to query an output task to determine its most recently output value. Or alternately, a general ability to read back data from an output task's buffer.
This one's been discussed lots of times over the years in the forums, but I didn't see a related Idea Exchange entry. Most of the discussion I've seen has related to AO, but I see no reason not to support this feature for DO as well.
There are many apps where normal behavior is to generate an AO waveform for a long period of time. Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc. When this happens, the output task will be in a more-or-less random phase of its waveform. The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V? A pure step function is often not desirable. We'd like to know where the waveform left off so we can generate a rampdown to 0. In some apps, the waveform shape isn't directly defined or known by the data acq code. So how can we ramp down to 0 if we don't know where to start from? This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.
Approach 1 (any output task):
Create a DAQmx property that will report back the current output value(s). I don't know if/how this fits the architecture of the driver and various hw boards. If it can be done, I'd ideally want to take an instantaneous snapshot of whatever value(s) is currently held in the DAC. It would be good to be able to polymorph this function to respond to either an active task or a channel list.
Approach 2 (active buffered tasks only):
We can currently query the property TotalSampPerChanGenerated as long as the task is still active. But we can't query the task to read back the values stored in the buffer in order to figure out where that last sample put us. It would be handy to be able to query/read the *output* buffer in a way analogous to what we can specify for input buffers. I could picture asking DAQmx Read for 1 sample from the output buffer after setting RelativeTo = MostRecentSample, Offset = 0 or -1 (haven't thought through which is the more appropriate choice). In general, why *not* offer the ability to read back data from our task's output buffers?
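Until something like this exists, the closest workaround only covers the case where the application itself knows the waveform it wrote: index the application's own copy of the data with TotalSampPerChanGenerated, then ramp down from there. A hedged sketch in Python with the nidaqmx package; the device name, rates, and waveform are assumptions.

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

waveform = np.sin(np.linspace(0, 2 * np.pi, 1000, endpoint=False))  # app-defined

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.timing.cfg_samp_clk_timing(10000.0, sample_mode=AcquisitionType.CONTINUOUS)
    task.write(waveform, auto_start=True)  # regenerates the waveform by default
    # ... later, on an unexpected interruption:
    n = task.out_stream.total_samp_per_chan_generated  # query while still active
    task.stop()
    last_value = waveform[(n - 1) % len(waveform)]  # roughly where the DAC was left
    ramp = np.linspace(last_value, 0.0, 500)        # gentle rampdown, not a step
    task.timing.cfg_samp_clk_timing(10000.0, sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=len(ramp))
    task.write(ramp, auto_start=True)
    task.wait_until_done()
```

Note that samples keep generating between the query and the stop, so this is only approximate, which is one more reason a driver-level snapshot would be better.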
I bought an NI USB-6251 BNC, but support explained to me that it has no Linux support out of the box. I will now have to find out how to use it on Linux systems myself (perhaps with help from the forum). It would be a nice feature if it shipped with Linux support.
In my post on the LabVIEW board I asked if it was possible to have control over the DIO of a simulated DAQ device. Unfortunately it seems this feature is not available. Once MAX is closed, the DIOs run through their own sequences.
If there were a non-blocking way to control a simulated DAQ device through MAX, it would permit much simpler prototyping of systems before they need to be deployed to hardware. For example, if you want to see how a program responds to a value change, simply enter it in the non-blocking MAX UI. Or, as in my original case, it can make an executable usable even if you don't have all the necessary hardware.
I think this feature should only be available for simulated devices.
Thanks for reading - and hopefully voting,
Is there any technical reason why this cannot be added to DAQmx? M series boards still have features that cannot be found on X or S series, such as analog current input.
Ideally, it would be best to be able to have multidevice tasks for both M and X series at the same time.
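For reference, this is what a multidevice task looks like today where it is supported (X Series, under the usual same-chassis/synchronization conditions); the idea is for the same channel-list syntax to accept M Series devices too. A sketch in Python with the nidaqmx package, with hypothetical device names:

```python
import nidaqmx

# One task spanning two devices via a comma-separated channel list.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0, PXI1Slot3/ai0")
    data = task.read(number_of_samples_per_channel=10)
```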
I use DAQmx a lot for writing .NET-based measurement software.
While the API itself is quite decent, the docs are horrible. Accessing them is convoluted at best, requiring the VS help viewer. Almost nothing is available online, and decent examples are quite scarce, which will definitely be an issue for absolute beginners...
This definitely deserves some attention!
Currently, when streaming analog or digital samples to a DAQ board, the output stays at the level of the last sample received when a buffer underflow occurs. This behavior can be observed on USB X Series Multifunction DAQ boards. I have the USB-6363 model. The exact mode is hardware-timed, buffered, continuous, and non-regenerating. The buffer underflow error code is -200290: “The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.”
I would like to have an option to configure the DAQ hardware to immediately set the analog and digital outputs to a predefined state if a buffer underrun occurs. Also, I would like an option to immediately assert one of the PFI pins on buffer underrun.
I believe this could be accomplished by modifying the X Series firmware and providing configuration of this feature in the DAQmx API. If no more samples are available in the buffer, the DAQ board should immediately write the predefined digital states / analog levels to the outputs and indicate the buffer underrun state on a PFI line. Then it should report the error to the PC.
Doing this in firmware has certain advantages: the other methods are just too slow, don't handle all situations, or require additional external circuitry.
Setting the outputs from software once the error occurs is slow (~25 ms, the time of 50,000 samples at 2 MS/s) and does not handle physical disconnection of the interface (a sketch of this software path follows below). The analog output does eventually go to 0 V on the USB-6363 when the USB cable is disconnected, but it takes about half a second.
Using a watchdog timer would also be too slow. The timer can be set to quite a short time, but from software I would not be able to reset it faster than every 10 ms. It would also require switching off the analog channels externally with additional circuitry, because the watchdog timer is not available for analog channels.
The only viable solution right now is to route the task's sample clock to a PFI line and detect when it stops toggling. It actually does stop after the last sample is generated. Once that occurs, the outputs can be switched off externally. This requires a whole lot of external circuitry and major development time. If you need the reaction time to be within one or two samples, the pulse detector needs to be customized for every possible sampling rate you might want to use. To make this work right for analog output, it would take a RISC microcontroller and analog electronic switches. If you wanted to use an external trigger to start the waveform, the microcontroller would have to turn on the analog switch, look for the beginning of the waveform's sample clock, record the initial clock interval as a reference, and finally turn off the switch if no pulse is received within the reference time.
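For completeness, here is what the slow software path sketched above looks like in practice, in Python with the nidaqmx package; the channel name and safe level are assumptions. It reacts in tens of milliseconds at best, versus the one-or-two-sample reaction the proposed firmware feature would give:

```python
import nidaqmx
from nidaqmx.errors import DaqError

UNDERFLOW = -200290  # the error code quoted above
SAFE_LEVEL = 0.0     # volts; an assumption

def stream_with_failsafe(task, chunks, channel="Dev1/ao0"):
    try:
        for chunk in chunks:
            task.write(chunk)  # raises DaqError -200290 on underflow
    except DaqError as err:
        if err.error_code != UNDERFLOW:
            raise
        task.stop()  # release the channel
        # Software-timed recovery: milliseconds at best, and it cannot help
        # if the USB cable is physically disconnected.
        with nidaqmx.Task() as safe:
            safe.ao_channels.add_ao_voltage_chan(channel)
            safe.write(SAFE_LEVEL)
```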
I’m actually quite impressed with how well the USB-6363 handles streaming to outputs. This allows me to output waveforms with a complexity that regular arbitrary generators with fixed memory and sequencing simply cannot handle. Buffer underflow, even at the highest sampling rate, is quite rare. However, to make my system robust and safe, I need a fast, simple, and reliable method of quickly shutting down the outputs that only a hardware/firmware solution can provide.
NI provides some 100-pin DAQ devices, e.g. one for industrial digital I/O.
But why don't you also offer a basic connector block at a reasonable price, especially for industrial applications, where it is common to wire (DIO) signals through DIN-rail-mounted terminal blocks?
This connector block should have the following features:
- DIN rail mountable
- simple wire connection, ideally with spring terminals
- 100-pin cable connection
- relatively small for installation in a switch cabinet
- no signal conditioning, just clamps
- much cheaper than the currently available SCB-100 block
Please see also this related idea:
The title pretty much says it all. I would like the ability either to configure a full hardware complement as simulated devices and then switch them over to real devices when the hardware arrives, or to go from real devices to simulated devices, without the need to add new, discrete simulated devices in MAX.
This would make for much easier offline development and eventual deployment to real hardware.
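Today the simulated flag is exposed read-only, so code can detect the situation but not flip it; a small sketch in Python with the nidaqmx package:

```python
import nidaqmx

for dev in nidaqmx.system.System.local().devices:
    print(dev.name, dev.product_type,
          "simulated" if dev.is_simulated else "real")
```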
When it comes to documenting a measurement, you need to report ALL the settings of the device that affect that measurement.
From a core memory dump written as a hex string to an XML document.... anything that reveals a difference in the settings that affect the measurement would be fine for documentation.
Something like a big property node readout followed by a format-into-string.... but make sure not to miss a property.... and it gets a bit more complicated when it comes to signal routing....
A measurement that isn't sufficiently documented is all for naught.
Just think of a nasty auditor.
It's so easy to make measurements with LabVIEW; please make it just as easy and consistent to document them.
A quick measurement setup with the DAQ Assistant/Express fills gigabytes, but after a certain time the data are useless because nobody knows how they were taken. A simple checkbox could add all this information to the variant of the waveform (or TDMS or ...), even if the operator doesn't have a clue about all the settings that affect his measurements.
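To make the idea concrete, here is a hand-rolled sketch in Python with the nidaqmx package that dumps a few channel and timing properties to a dictionary for logging. The property list is an illustration only; as argued above, the hard part is guaranteeing that no relevant property or signal route is missed, which is why this belongs in the driver:

```python
import nidaqmx

def snapshot_settings(task):
    """Collect a (non-exhaustive) snapshot of task settings for logging."""
    settings = {"task": task.name, "channels": {}}
    for ch in task.ai_channels:
        settings["channels"][ch.name] = {
            "min": ch.ai_min,
            "max": ch.ai_max,
            "terminal_config": str(ch.ai_term_cfg),
        }
    settings["sample_rate"] = task.timing.samp_clk_rate
    return settings
```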
Any series card should have a feature listing the different parameters it supports, like voltage, temperature, etc. (maybe via a property node), so that the user can configure only the supported parameters.
Ex: The SCXI-1520 module can be configured for strain, pressure, or voltage, but this information is only available from the manual or when a task is created in MAX. In LabVIEW I can't get this information directly: the software also lets me configure the 1520 for temperature, and I only find out that the 1520 doesn't support temperature parameters once I try to acquire.
So what do you all think? Share your ideas on this, please.
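For what it's worth, the driver does expose a device-level query along these lines: the C API's DAQmxGetDevAISupportedMeasTypes (the AI.SupportedMeasTypes device property), surfaced in the Python nidaqmx wrapper as Device.ai_meas_types. A sketch with a hypothetical module name; whether it covers a given SCXI module is something to verify:

```python
import nidaqmx

dev = nidaqmx.system.Device("SC1Mod1")  # hypothetical module name
print(dev.ai_meas_types)  # list of supported AI measurement types
```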
I continually come to your site looking for the DAQmx Base API manual and have yet to find it. I eventually have to dig out an old CD to find my copy.
How 'bout posting these online so that we can help ourselves out of jams?
Would like to be able to collect a couple of channels of analog input on the iPad. This one is nice, but I need a minimum of 2 analog inputs, and I would rather have NI: http://www.oscium.com/
Response from corporate:
"We don't currently have anything that would meet
the customer's requirement of being able to plug in directly into the iPad for
I don't believe that the iPad supports Silverlight, which is a framework developed by Microsoft. Also, wireless DAQ has to communicate with a host running DAQmx, so the customer would still need a 2nd computer even if using wireless DAQ.
If you want to connect data acquisition hardware (of any form factor) to a machine running LabVIEW and DAQmx, then use LabVIEW Web Services to publish the front panel to the web and view/control it from the iPad.
We do have several USB products that work with Windows-based netbooks and could be an alternative solution if the topic is open to a non-Apple platform. For example, the 5132/5133 are bus-powered digitizers with much higher sample rate, bandwidth, and buffer size compared to the Oscium device. However, the price is also quite a bit higher."
Multiple people have requested a natural way for LabVIEW and SignalExpress to make a rotational speed measurement using a quadrature encoder. An Express VI under "Acquire Signals >> Counter Input >> Rotational Speed" that asks basic quadrature encoder questions and computes the rotational speed would be very useful. It would ask for things such as ticks per revolution and decoding type (x1, x2, x4), which are needed to compute the rotational speed. In addition, this could then be converted into a shipping example for DAQmx relatively easily. I have had multiple people ask this question and believe that, especially within SignalExpress, this would be very useful.
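As a sketch of the measurement such an Express VI would wrap, here is a finite-difference speed estimate over an angular encoder channel, in Python with the nidaqmx package; the counter name and encoder parameters are assumptions:

```python
import time
import nidaqmx
from nidaqmx.constants import AngleUnits, EncoderType

with nidaqmx.Task() as task:
    task.ci_channels.add_ci_ang_encoder_chan(
        "Dev1/ctr0",                    # hypothetical counter
        decoding_type=EncoderType.X_4,  # the x1/x2/x4 question the VI would ask
        units=AngleUnits.DEGREES,
        pulses_per_rev=1024,            # the "ticks per revolution" question
    )
    task.start()
    a0, t0 = task.read(), time.perf_counter()
    time.sleep(0.1)                     # measurement gate (assumption)
    a1, t1 = task.read(), time.perf_counter()
    rpm = ((a1 - a0) / 360.0) / ((t1 - t0) / 60.0)  # revolutions per minute
    print(f"{rpm:.1f} RPM")
```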