Data Acquisition Idea Exchange

Many CAN protocols require a byte in a cyclic message to be incremented each time the message is sent (often byte 0, used as a rolling counter). This may be possible with VeriStand, but I am not using it. When using only LabVIEW and the NI-XNET API, the only way to achieve this is to call the XNET Write function and set the byte manually for every transmission. But having to call the API each time the message should be sent removes all the benefits of cyclic messages. Moreover, LabVIEW can't guarantee the same level of speed and determinism (if the message is to be sent every 5 ms, for example).

Being able to configure a signal as an auto-incrementing counter would be a huge improvement. To me, this is a must-have, not a nice-to-have.
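
For reference, a minimal C sketch of the manual workaround described above. The XNET session setup is omitted and the frame write is reduced to a hypothetical send_frame() helper (in the real NI-XNET C API this would be a frame write on an output session); the rolling-counter logic is the point:

#include <stdint.h>

/* Hypothetical helper standing in for an NI-XNET frame write on an
   output session; payload is the 8-byte CAN payload. */
void send_frame(const uint8_t payload[8]);

void transmit_cyclic(void)
{
    uint8_t payload[8] = {0};
    uint8_t counter = 0;

    for (;;) {
        payload[0] = counter++;   /* byte 0 = rolling counter, wraps at 255 */
        send_frame(payload);
        /* sleep ~5 ms here; on a desktop OS this loop timing is not
           deterministic, which is exactly the problem described above */
    }
}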

It is not possible to build the kernel module nirlpk 2.0 on kernels >= 3.10:

 

nimhddk_linuxkernel/LinuxKernel/nirlpk/objects/nirlpk.c:1076:5: error: implicit declaration of function ‘create_proc_read_entry’ [-Werror=implicit-function-declaration]
if (create_proc_read_entry(nNIRLP_kDriverAlias, 0, nNIRLP_procDir, nNIRLP_procRead, NULL))
^

 

create_proc_read_entry was deprecated and then removed in kernel 3.10.

 

See the discussion here: http://forums.ni.com/t5/Driver-Development-Kit-DDK/VM-RESERVED-and-create-proc-read-entry-deprecated-in-3-x-kernels/td-p/2937576
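
For anyone patching nirlpk locally in the meantime, a minimal sketch of the usual migration: replace create_proc_read_entry with proc_create plus the seq_file single_open helper. The nNIRLP_* names are taken from the error above; the show callback body is an assumption, since I don't have the original nNIRLP_procRead source (and note that kernels >= 5.6 take struct proc_ops instead of file_operations):

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

extern struct proc_dir_entry *nNIRLP_procDir;   /* from nirlpk.c */

static int nNIRLP_procShow(struct seq_file *m, void *v)
{
    /* emit whatever nNIRLP_procRead used to sprintf into its page */
    seq_printf(m, "nirlpk loaded\n");
    return 0;
}

static int nNIRLP_procOpen(struct inode *inode, struct file *file)
{
    return single_open(file, nNIRLP_procShow, NULL);
}

static const struct file_operations nNIRLP_procFops = {
    .owner   = THIS_MODULE,
    .open    = nNIRLP_procOpen,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = single_release,
};

/* old:  create_proc_read_entry(nNIRLP_kDriverAlias, 0, nNIRLP_procDir,
 *                              nNIRLP_procRead, NULL)
 * new:  proc_create(nNIRLP_kDriverAlias, 0, nNIRLP_procDir,
 *                   &nNIRLP_procFops)
 */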

It would be great to be able to develop software on 64-bit Linux systems using DAQmx.

Since we're developing software for 64-bit Linux, this is a must for us: it means a 64-bit kernel module as well as 64-bit libraries.

Task.png

 

 

It would be nice to have the ability to spawn a "child" Task based upon a "parent" Task's local virtual channels (a sketch of today's workaround follows below). This can be accomplished with global virtual channels today, but not easily with local virtual channels within the Task. Currently, we dynamically generate Tasks based upon the physical channels and save them to an external file. There are many variations of this, but all require a fair amount of programming for complete automation. The external calibration interface in MAX has greatly improved over the years, and it is now easy to calibrate multiple sensors at the same time. On top of that, it is nice to have device setup and calibration information in one location.
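
For context, a minimal C-API sketch of that workaround: rebuilding a "child" task at run time from a subset of the parent's physical channels, duplicating the channel configuration by hand. The device and channel names are placeholders:

#include <NIDAQmx.h>

/* Re-create two of the parent's channels in a new task. Today the
   channel parameters must be duplicated manually; the idea above would
   let a child task inherit them from the parent's local channels. */
int create_child_task(TaskHandle *child)
{
    int32 err = DAQmxCreateTask("ChildTask", child);
    if (err < 0) return err;

    /* same physical channels and ranges the parent task used */
    return DAQmxCreateAIVoltageChan(*child, "Dev1/ai0,Dev1/ai2", "",
                                    DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                    DAQmx_Val_Volts, NULL);
}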

 

 

 

Every time I have to work with an NI DAQ device, the first thing I need to know is what each pin can or can't do.

Currently this involves looking through something like seven different documents to find little bits of information and bringing them back to your application.

 

A block diagram could easily be a reference point for the rest of the documentation (you want to know about the pin I/O for your device? Look at this document).

Plus, a good block diagram tells you what you need to know quickly and clearly. A picture is worth a thousand words.

 

Some might find the current documentation adequate, but personally I would really like a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have one, and it is an extremely useful tool. So why not have one for DAQ devices as well?

I use DAQmx a lot for writing .NET-based measurement software.

 

While the API itself is quite decent, the docs are horrible. Accessing them is convoluted at best, requiring the VS help viewer. Almost nothing is available online, and decent examples are quite scarce, which will definitely be an issue for absolute beginners.

 

This definitely deserves some attention!

 

Cheers,

 

Kris

Like a couple of other LabVIEW users, I wish we had the ability to create virtual channels (or even global tasks) for an internal channel of a DAQ or SCXI device.

This feature should also allow internal channels to be configured and used from the DAQ Assistant (though I personally don't prefer using the DAQ Assistant).

 

Check this post here; that feature request is along the same lines.
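
For readers who haven't used them: some internal channels can already be addressed by name through the lower-level API, as in this minimal C sketch. The device name and the _ao0_vs_aognd channel are examples only (which internal channels exist varies by device); the idea above is to make such channels first-class in virtual channels, global tasks, and the DAQ Assistant:

#include <NIDAQmx.h>

/* Read back AO 0 through the device's internal loopback channel.
   "Dev1/_ao0_vs_aognd" is an example name; check which internal
   channels your device documents. */
int read_internal_channel(float64 *value)
{
    TaskHandle task;
    int32 err;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/_ao0_vs_aognd", "",
                             DAQmx_Val_Cfg_Default, -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    err = DAQmxReadAnalogScalarF64(task, 10.0, value, NULL);
    DAQmxClearTask(task);
    return err;
}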

Currently we are using LabWindows/CVI with a 96-bit DIO card (PXI-6509).

 

What we have found, and NI support confirmed, is that with the following configuration the software needs to be aware of the bit offset when writing to one or more lines on a port.

 

Virtual Channel                    Physical Channel

dataEnable                         dev1/port0/line2

 

Our assumption was that writing a value of 1 to 'dataEnable' using DAQmxWriteDigitalU8() would set the virtual channel 'dataEnable'. We found that is not the case: we need to write a value of 0x04. Yet the bits that are zero in the value written to 'dataEnable' have no effect on other lines of the port that are already set. This gives us the impression that the driver knows which bit position we are trying to write to.

 

So, based on this, why is it not possible that when I write to a virtual channel from LabWindows/CVI I can just do something like this:

 

Virtual Channel                    Physical Channel

dataEnable_Clk                   dev1/port0/line2:3

 

Line 2 = dataEnable

Line 3 = dataClk

 

write( dataEnable_Clk, 1)               // to set enable line high

write( dataEnable_Clk, 3)               // to keep enable line high and raise clk line

write( dataEnable_Clk, 1)               // keep enable line high and lower clk line

write( dataEnable_Clk, 0)               // lower enable line

 

** The assumption is that separate lines on another port are used to present the data to the external hardware; they are not shown here. The data would have been set up before the sequence above, then changed and the sequence repeated as needed.

 

Here I don't have to keep in mind that Enable is on line 2 and Clk is on line 3, or set up values of 0x04 for the Enable and 0x08 for the Clk. If I have to do that, I would rather have direct access to each port and just write the values directly. I know there is the register level I could use, but doing this at a higher level is better.

 

In our code, when an internal function is called to write data, we would like to just write a value to the virtual channel and not have to figure out the bit alignment to shift the value over before using one of the current functions. One partial alternative is sketched below.
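
For what it's worth, DAQmxWriteDigitalLines takes one byte per line in the task, so single-sample values can be written without the 0x04/0x08 masks, at the cost of expanding the value into a per-line array. A sketch, assuming a task created on dev1/port0/line2:3 (I'd still prefer proper channel-relative values in the port-format functions):

#include <NIDAQmx.h>

/* dataEnable = line2 -> lines[0], dataClk = line3 -> lines[1];
   each element is 0 or 1, no bit shifting required. */
void write_enable_clk(TaskHandle task, int enable, int clk)
{
    uInt8 lines[2] = { (uInt8)enable, (uInt8)clk };
    int32 written;
    DAQmxWriteDigitalLines(task, 1, 1, 10.0, DAQmx_Val_GroupByChannel,
                           lines, &written, NULL);
}

/* the sequence from above becomes:
     write_enable_clk(task, 1, 0);   // set enable line high
     write_enable_clk(task, 1, 1);   // keep enable high, raise clk
     write_enable_clk(task, 1, 0);   // keep enable high, lower clk
     write_enable_clk(task, 0, 0);   // lower enable line
*/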

 

Let me know your thoughts.

 

Bob Delsescaux

We need NI to provide a solution for real-time monitoring and automated debugging of an existing GPIB device loop from a software application, with no impact on the original device loop when a GPIB+ card is plugged in.

 

It would be nice to be able to start the Analog Waveform Editor with a specified file loaded.

Dear NI Idea Exchange,

 

I recently had a service request where the customer wished to use a mass flow meter speaking the HART protocol (a massive industrial protocol, used worldwide by over 30 million devices) to communicate updated values to a cRIO-9074 chassis through an NI 9208 module.

 

They did not know how they could use this protocol with our products. There was only one example online covering this protocol, and it used low-level VISA functions. There is currently no real support for it.

 

I suggested that if they wished to use this sensor, they would need to convert it to a protocol we do support, for example Modbus or RS-232 (using a gateway/converter).

Then they could use low-level VISA functions for the data communication.

They were fine with doing this, but they felt that NI should support this protocol with an off-the-shelf adaptor or module. That is the main point of this idea.

 

There is presumably a reason why we do not currently support this protocol the way we support PROFIBUS or Modbus.

 

I thought I would pass on this customer feedback, as it seemed relevant to NI and our vision.

 

Regards,

 

Dominic Clarke

 

Applications Engineer

National Instruments UK

If you set up a change detection event like so:

change detection.png

 

There isn't anything in the event data node to tell you which line triggered the interrupt. I propose adding something to the event data node for this event (such as a bit field or a reference to the channel) so the programmer knows which line fired the event.

 

The workaround is to do a DAQmx read at that point and mask the data against the previous data, but I would prefer not to do this.
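
For completeness, a minimal C-API sketch of that masking workaround: register a change-detection callback, then XOR the current port state against the previous one. Device and port names are placeholders:

#include <NIDAQmx.h>
#include <stdio.h>

static uInt32 lastState = 0;

/* Called by DAQmx on each change-detection event; XOR against the
   previously read state to find which lines fired. */
int32 CVICALLBACK onChange(TaskHandle task, int32 signalID, void *data)
{
    uInt32 state = 0;
    int32 read;
    DAQmxReadDigitalU32(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                        &state, 1, &read, NULL);
    uInt32 changed = state ^ lastState;   /* set bits = lines that changed */
    lastState = state;
    printf("changed lines mask: 0x%08X\n", (unsigned)changed);
    return 0;
}

/* registration, with the task configured for change detection on
   e.g. Dev1/port0 via DAQmxCfgChangeDetectionTiming:
     DAQmxRegisterSignalEvent(task, DAQmx_Val_ChangeDetectionEvent, 0,
                              onChange, NULL);
*/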

When it comes to documenting a measurement, you need to report ALL settings of the device that affect that measurement.

From a raw memory dump written as a hex string to an XML document: anything that reveals a difference in the settings that affect the measurement would be fine for documentation.

Something like a big property node readout followed by a Format Into String... but make sure not to miss a property, and it gets a bit more complicated when it comes to signal routing.

 

A measurement that isn't sufficiently documented is all for naught. 

or

Just think of a nasty auditor 😉

 

It's so easy to make measurements with LabVIEW; please make it just as easy and consistent to document them.

 

Example:

A quick measurement setup with the DAQ Assistant/Express VIs fills gigabytes, but after a certain time the data is useless because nobody knows how it was taken. A simple checkbox could add all this information to the waveform's variant attributes (or TDMS, or ...), even if the operator doesn't have a clue about all the settings that affect his measurements.
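
To make the "big property readout" concrete, a minimal C sketch that dumps a few per-channel settings into a report. It covers only range and terminal configuration; the whole point of the idea above is that the driver should enumerate every relevant property (including routing) so nothing gets missed:

#include <NIDAQmx.h>
#include <stdio.h>

/* Append a few AI settings for one channel to a report file.
   A complete dump would need every property that affects the
   measurement -- exactly what is being requested above. */
void dump_chan_settings(TaskHandle task, const char *chan, FILE *report)
{
    float64 lo = 0, hi = 0;
    int32 termCfg = 0;
    DAQmxGetAIMin(task, chan, &lo);
    DAQmxGetAIMax(task, chan, &hi);
    DAQmxGetAITermCfg(task, chan, &termCfg);
    fprintf(report, "%s: range [%g, %g] V, terminal config %d\n",
            chan, lo, hi, (int)termCfg);
}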


Without needing to clear "all" associated events, or even opening MAX, I would like the ability to replace NI-USB device "Doohickey123", serial number "junkgarbagestuff", with another NI-USB device of the same type. Perhaps a pop-up option like: "Replace no-longer-installed NI-53xx alias 'gizmo' with new NI-53xx?"

 

It sure would help when I swap NI-xxxx devices among systems, especially the USB devices!

When a piece of hardware is simulated in MAX, I would like to be able to insert a transfer function or a signal-simulating VI to get a more realistic test of a system. The current default of generating a sine wave for simulated acquisition only lets me test part of the code. If a transfer function, lookup table, or custom VI could be substituted for the sine wave generation, I would be able to test many other facets of a system.

Multiple people have requested a natural way for LabVIEW and SignalExpress to make a rotational speed measurement using a quadrature encoder. An Express VI under Acquire Signals >> Counter Input >> Rotational Speed that asks basic quadrature-encoder questions and computes the rotational speed would be very useful. It would ask for things such as ticks per revolution and decoding type (x1, x2, x4). This could then be turned into a shipping example for DAQmx relatively easily (the underlying computation is sketched below). I have had multiple people ask this question and believe that, especially within SignalExpress, this would be very useful.
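
For reference, the computation such an Express VI would wrap is small; a C sketch of it, assuming the counter task yields an accumulated edge count read at a fixed interval:

/* Rotational speed from quadrature encoder counts.
   decoding is 1, 2, or 4 (x1/x2/x4), so the effective counts per
   revolution is ticksPerRev * decoding. */
double rpm_from_counts(double deltaCounts, double dtSeconds,
                       double ticksPerRev, int decoding)
{
    double revs = deltaCounts / (ticksPerRev * decoding);
    return (revs / dtSeconds) * 60.0;   /* rev/s -> rev/min */
}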

 

 

Rotation.png

 

 

Mac OS X Lion (10.7) is just around the corner. NI-VISA 5.0 does not run or install on Lion. When will a new version be available for Lion?

 

Christophe

We need a way to query an output task to determine its most recently output value, or alternatively, a general ability to read back data from an output task's buffer.

 

This one's been discussed lots of times over the years in the forums but I didn't see a related Idea Exchange entry.  Most of the discussion I've seen has related to AO but I see no reason not to support this feature for DO as well.

 

There are many apps where normal behavior is to generate an AO waveform for a long period of time.  Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc.  When this happens, the output task will be in a more-or-less random phase of its waveform.  The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V?  A pure step function is often not desirable.  We'd like to know where the waveform left off so we can generate a rampdown to 0.  In some apps, the waveform shape isn't directly defined or known by the data acq code.  So how can we ramp down to 0 if we don't know where to start from?  This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.

 

Approach 1:

  Create a DAQmx property that will report back the current output value(s).  I don't know if/how this fits the architecture of the driver and various hw boards.  If it can be done, I'd ideally want to take an instantaneous snapshot of whatever value(s) is currently held in the DAC.  It would be good to be able to polymorph this function to respond to either an active task or a channel list.

 

Approach 2 (active buffered tasks only):

We can currently query the property TotalSampPerChanGenerated as long as the task is still active. But we can't query the task to read back the values stored in the buffer in order to figure out where that last sample put us. It would be handy to query/read the *output* buffer in a way analogous to what we can specify for input buffers. I could picture asking DAQmx Read for 1 sample from the output buffer after setting RelativeTo = MostRecentSample and Offset = 0 or -1 (I haven't thought through which is the more appropriate choice). In general, why *not* offer the ability to read back data from our task's output buffers?
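
To illustrate, here's a C sketch of how Approach 1 plus the rampdown might look. The value-snapshot call is hypothetical -- no such function exists in today's DAQmx; it is exactly the capability being requested:

#include <NIDAQmx.h>

/* HYPOTHETICAL: snapshot whatever value the DAC currently holds.
   This declaration is the requested API, not a real DAQmx call. */
int32 DAQmxGetAOCurrentValue(TaskHandle task, const char chan[],
                             float64 *value);

void ramp_to_zero(TaskHandle task)
{
    float64 lastVal, ramp[100];
    int32 written, i;

    DAQmxGetAOCurrentValue(task, "Dev1/ao0", &lastVal);  /* hypothetical */

    for (i = 0; i < 100; i++)                /* gentle ramp, not a step */
        ramp[i] = lastVal * (1.0 - i / 99.0);

    /* DAQmxWriteAnalogF64 is real; writing the rampdown is the easy part */
    DAQmxWriteAnalogF64(task, 100, 1, 10.0, DAQmx_Val_GroupByChannel,
                        ramp, &written, NULL);
}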

 

-Kevin P

It would be great if the full DAQmx library supported all NI data acquisition products on Windows, Mac OS X and Linux. The situation right now is too much of a hodge-podge of diverse drivers with too many limitations. There's an old, full DAQmx library that supports older devices on older Linux systems, but it doesn't look like it's been updated for years.  DAQmx Base is available for more current Linux and Mac OS systems, but doesn't support all NI devices (especially newer products).  DAQmx Base is also quite limited, and can't do a number of things the full DAQmx library can.  It's also fairly bloated and slow compared to DAQmx.  While I got my own application working under both Linux and Windows, there's a number of things about the Linux version that just aren't as nice as the Windows version right now.  I've seen complaints in the forums from others who have abandoned their efforts to port their applications from Windows to Mac OS or Linux because they don't see DAQmx Base as solid or "commercial-grade" enough.

 

I'd really like to be able to develop my application and be able to easily port it to any current Windows, Mac or Linux system, and have it support any current NI multi-function DAQ device, with a fast, capable and consistent C/C++ API.

 

Anyone else see this as a priority for NI R&D?

Based on this LabVIEW Idea...

 

Support the iPad as an execution target

 

Adding this type of support would allow NI and others to develop applications similar to the Oscium oscilloscope using NI DAQ hardware.