I would like to be able to have multiple, widely separated LabVIEW development environments installed side by side (e.g. LabVIEW 7.0, 8.0.1, 8.2, and 2010). As I understand it, this is mainly limited by the DAQmx drivers.
The problem I run into is that I need to support many applications for more than five years. We have test equipment on our production line that has been running the same software since the 6.0 days. Management will come along and ask me to add one little feature to that software.
As it is now, I have to drag out my old computer with LabVIEW 6.0 installed, develop on that, and then go back to my new development in LabVIEW 2010. I cannot simply upgrade the application to 2010, for several reasons:
1) I can't have all the versions co-exist on one computer, so the code needs to move from one machine to the next, upgrading along the way.
2) Different versions can change things in dramatic ways and break other pieces of code (e.g. Traditional DAQ vs. DAQmx).
3) Because of #2, I need to do a full revalidation of the code for what should be a minor change.
One thing that the NI architects do not seem to understand is that revalidation is not a trivial activity. It can interrupt the production schedule, since I often cannot test on anything but the production equipment. That interruption can take days to weeks even if no problems are uncovered, and much longer if we find that upgrading caused an issue. If I keep my old development environment, all I need to test is the changes to the code. If I change the compiler, I need to test ALL the code to be sure the compiler change did not introduce any new bugs.
This is especially challenging in tightly controlled environments such as medical device manufacturing, where any change to the process requires a great deal of scrutiny.
Please make an effort to consider this in the future. Until then, I will be stuck with four computers under my desk, all running different versions of LabVIEW.
When using TDMS on cRIO systems, there are a couple of considerations that don't normally come into play when storing data as TDMS files:
* The current version of the file system used on cRIO controllers degrades significantly in performance if -any- folder on the cRIO contains more than ~100 files. (Work-around: more elaborate folder structures, but much of that structuring would exist only to work around this shortcoming of the (old) file system.)
* The drives are SSDs with a limited lifespan despite wear-leveling. Writing and re-writing these index files adds unnecessary overhead and wear on the disks.
* The index files use up space, which is (very) limited on some cRIOs (even if the files themselves are small). People may be quick to point out that you can add a thumb drive, but the downside is that thumb drives (as far as I know) need to be FAT-formatted. The cRIO file system is atomic and fail-safe, so you pretty much don't have to worry about sudden power outages or interruptions mid-write; on a thumb drive you would face all of those issues, which could in the worst case corrupt the whole drive.
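For what it's worth, the folder-structure workaround from the first bullet can be automated. Here is a minimal Python sketch (the function name, folder naming scheme, and 100-file cap are my own illustration, not part of any NI API) that buckets log files into numbered subfolders so no single folder ever exceeds the threshold:

```python
import os

MAX_FILES_PER_DIR = 100  # stay under the ~100-file performance threshold

def bucketed_path(root, file_index, name):
    # Place file N into subfolder N // MAX_FILES_PER_DIR so that no
    # single folder ever accumulates more than MAX_FILES_PER_DIR files.
    sub = "batch_%04d" % (file_index // MAX_FILES_PER_DIR)
    folder = os.path.join(root, sub)
    os.makedirs(folder, exist_ok=True)
    return os.path.join(folder, name)
```

Of course, as noted above, this structuring exists only to dodge the file-system limitation, which is exactly the kind of incidental complexity the proposed flag would help avoid.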
I propose adding a boolean (defaulting to false) on the TDMS Open called "suppress writing TDMS index to disk", or some smarter name along those lines. This would still allow the TDMS index to be created, but it would remain in memory only and never be written to disk. When TDMS Close is called, the memory is released and the TDMS file is written to disk without the index file. If the same file is opened again, extra time would be needed since the index would have to be re-created (again in memory only, if the boolean says so), but I think for the most part this overhead would be more than acceptable.
I'm not sure how "simple" modifying the TDMS open and close functions would be, but I do know that there are many cases where this flag would make sense.
If you set up a change detection event like so:
There isn't anything in the event data node to tell you which line triggered the interrupt. I'm proposing we add something in the event data node for this event (like a bit field or a reference to the channel) so the programmer would know which line fired the event.
The workaround is to do a DAQmx Read at that point and mask the data against the previous data, but I would prefer not to do this.
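For anyone stuck with the workaround in the meantime, the masking step amounts to XOR-ing the two port reads. A pure-Python sketch of the logic (in the real application `prev` and `curr` would come from two DAQmx port reads; the function name is mine):

```python
def changed_lines(prev, curr, num_lines=8):
    # XOR exposes exactly the bits that differ between the two port
    # reads; each set bit marks a line that changed state.
    diff = prev ^ curr
    return [bit for bit in range(num_lines) if diff & (1 << bit)]
```

A bit field like this in the event data node would save both the extra read and this bookkeeping.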
The Analog Input test panel in Measurement & Automation Explorer (MAX) provides a quick way to examine a signal and vary acquisition parameters. It would be useful to be able to zoom the time axis and have a cursor display so that, for example, noise level or rise time could be examined in more detail. The time-axis limits can currently be overwritten manually as a way to zoom, but that is cumbersome. Assuming the graph in this test panel is built from a standard NI graph control, zoom and cursor capability should already be part of it and thus easily added.
Dear NI Idea Exchange,
I recently had a service request where the customer wished to use a mass flow meter speaking the HART protocol (a major industrial protocol used worldwide, with over 30 million devices) to communicate updated values to a cRIO-9074 chassis with an NI 9208 module.
They did not know how they could use this protocol with our products. There is only one example online covering it, using low-level VISA functions; there is currently no real support for the protocol.
I suggested that if they wished to use this sensor, they would need to convert to a protocol we do support, for example Modbus or RS-232 (using a gateway/converter).
Then they could use low level VISA functions to allow the data communication.
They were fine with doing this but they felt that NI should probably support this protocol with an off-the-shelf adaptor or module. This is the main point of this idea exchange.
There is clearly a reason why we do not currently provide support for this protocol in comparison to PROFIBUS or Modbus.
I thought I would pass on this customer feedback as it seemed relevant to NI and our vision.
National Instruments UK
How feasible would it be to set up an open-source driver project? This would be something NI could host, but anyone could participate in building a driver that fits different platforms. It could build on the DDK driver but be centralized for collaborative effort. I like the name Open-DAQ driver. This would be a good way to reach Linux users, who are accustomed to source code. I know we have DAQmx Base as a separate driver for other platforms, but an open-source project would let the Linux community build solutions for novel or unusual Linux versions.
Without needing to clear "all" associated events, or even opening MAX, I would like the ability to replace NI-USB device "Doohickey123" (serial number "junkgarbagestuff") with another NI-USB device of the same type, perhaps via a pop-up option like: "Replace no-longer-installed NI-53xx alias 'gizmo' with new NI-53xx?"
Sure would help when I swap NI-xxxx devices among systems, especially the USB devices!
Absolute encoders have been around for some time, but NI's motion hardware still supports only incremental encoders. I would like to see support for absolute encoders in NI Motion or NI SoftMotion.
I have a data acquisition NI-DAQmx/C++ program that continuously acquires 5 channels of data at 40 kHz/channel, reading them in 0.1-second chunks. It works perfectly for over 14 hours of continuous acquisition, but at 14 hours, 54 minutes and 47 seconds the program hangs due to an overflow in the int32 DAQmxInternalAIbuffer_Offset value sent to the DAQmxSetReadOffset() function. With DAQmxSetReadRelativeTo(), I set the offset relative to the first sample using DAQmx_Val_FirstSample. (See "32-bit limitation of the NI-DAQmx int32 DAQmxSetReadOffset() function?")
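The hang time follows directly from the int32 limit: 2^31 samples at 40 kS/s per channel is 53,687 seconds, i.e. 14 h 54 min 47 s. A quick sketch of the arithmetic (plain Python, no DAQmx calls; the function name is mine):

```python
def int32_overflow_time(rate_per_chan_hz):
    # With RelativeTo = FirstSample, the offset counts every sample per
    # channel since the task started, so a signed int32 offset wraps
    # after 2**31 samples.
    seconds = (2**31) // rate_per_chan_hz
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return hours, minutes, secs

print(int32_overflow_time(40000))  # matches the observed hang time
```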
It would be very helpful for the DAQmxSetReadOffset() offset value to be 64-bit rather than the current int32. This would make the function analogous to DAQmxGetReadTotalSampPerChanAcquired(), which returns a 64-bit value. I understand that the offset is maintained internally as 64 bits, so perhaps this would not be too difficult to do.
I hope that National Instruments fixes this limitation in the API, not just for 64-bit Windows but also for 32-bit Windows, because many of us still use 32-bit compilers and our users run Windows XP. Perhaps it could be implemented as a separate 64-bit DAQmxSetReadOffset64() function for 32-bit Windows.
A DAQmx channel control allows the available channels to be filtered by I/O type (Analogue Input, Analogue Output, etc.) using the "IO Name Filtering..." option.
A DAQmx task control, on the other hand, doesn't.
I'd find the option to filter listed tasks by I/O type pretty useful.
Since NI has already developed hardware interfaces for USB and the PCI bus and its variants, why not leverage that and create a line of hardware dongles compatible with the MAX, LabVIEW and LabWindows IDEs? The volume may be low, but it would be an integrated solution for ease of hardware/software interaction. Each dongle would have a unique serial number that the programmer's application can check and verify before allowing the software to run.
I love simulated devices, but one major drawback is the static nature of the simulated data. It would be awesome to be able to import real-world data for playback in simulated devices. Essentially, analog input channels could take a waveform or a waveform file, and digital inputs could take a digital file; even a pop-up probe for the inputs, where the user can toggle digital lines or turn a knob on an analog line, would be very nice to have. This would let the programmer capture the real data a system expects to receive and then run the DAQ application in simulation, using DAQmx simulated devices with exact real-world data.
Currently the AI.PhysicalChans property returns the number of physical measurement channels (e.g. 16). However, to programmatically calculate the maximum sampling rate for a device such as the NI 9213, we need the total number of channels including internal ones such as cold-junction and autozero (for the 9213, that total is 18).
I would therefore like to suggest adding a property that returns the number of internal channels, or the total number of physical channels, or something similar.
Use case: programmatically calculate the maximum sampling rate in a program that should work for multiple types of devices without being aware of their type.
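The calculation the property would enable is simple once the total channel count is known. A sketch with purely illustrative numbers (the aggregate rate and channel counts below are hypothetical, not 9213 specifications; the function name is mine):

```python
def max_rate_per_channel(aggregate_rate, measurement_chans, internal_chans):
    # A multiplexed ADC shares its aggregate conversion rate across every
    # channel it scans, including internal channels (cold-junction
    # compensation, autozero), so the per-channel rate is the aggregate
    # rate divided by the *total* channel count.
    return aggregate_rate / (measurement_chans + internal_chans)

# Hypothetical device: 1800 S/s aggregate, 16 measurement + 2 internal channels.
rate = max_rate_per_channel(1800.0, 16, 2)
```

Today the `internal_chans` term is exactly the number the driver does not expose, which is why the calculation cannot be done device-independently.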
Thanks for consideration and have a great day!
[actually a customer's idea]
Would it be possible to update the export wizard in MAX so that the NI-DAQmx Tasks list under Data Neighborhood is in alphabetical order? In the main MAX application the list is ordered, so finding tasks named with a common prefix is easy. In the export wizard, however, you have to scroll and hope you clicked them all.
Certified LabVIEW Developer
Lead Engineer - LabVIEW
In my post on the LabVIEW board I asked if it was possible to have control over the DIO of a simulated DAQ device. Unfortunately, it seems this feature is not available. Once MAX is closed, the DIOs run through their own sequences.
If there were a non-blocking way to control a simulated DAQ device through MAX, it would permit much simpler prototyping of systems before they are deployed to hardware. For example, if you want to see how a program responds to a value change, simply enter the change in the non-blocking MAX UI. Or, as in my original case, it would make an executable usable even when you don't have all the necessary hardware.
I think this feature should only be available for simulated devices.
Thanks for reading - and hopefully voting,
Is there any technical reason why this cannot be added to DAQmx? M series boards still have features that cannot be found on X or S series such as analog current input.
Ideally, it would be best to be able to have multidevice tasks for both M and X at the same time.
I'm working with some B-class devices and can't work on the project at home because I don't have the hardware and can't simulate it. Could you add the NI USB-6008 and 6009 (the whole class would be nice) to the MAX 'Create Simulated Device' list?
We need a way to query an output task to determine its most recently output value. Or alternately, a general ability to read back data from an output task's buffer.
This one's been discussed lots of times over the years in the forums but I didn't see a related Idea Exchange entry. Most of the discussion I've seen has related to AO but I see no reason not to support this feature for DO as well.
There are many apps where normal behavior is to generate an AO waveform for a long period of time. Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc. When this happens, the output task will be in a more-or-less random phase of its waveform. The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V? A pure step function is often not desirable. We'd like to know where the waveform left off so we can generate a rampdown to 0. In some apps, the waveform shape isn't directly defined or known by the data acq code. So how can we ramp down to 0 if we don't know where to start from? This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.
Approach 1:
Create a DAQmx property that reports back the current output value(s). I don't know if or how this fits the architecture of the driver and the various hardware boards. If it can be done, I'd ideally want an instantaneous snapshot of whatever value(s) are currently held in the DAC. It would be good to polymorph this function to accept either an active task or a channel list.
Approach 2 (active buffered tasks only):
We can currently query the property TotalSampPerChanGenerated as long as the task is still active. But we can't query the task to read back the values stored in the buffer in order to figure out where that last sample put us. It would be handy to query/read the *output* buffer in a way analogous to what we can specify for input buffers. I could picture asking DAQmx Read for 1 sample from the output buffer after setting RelativeTo = MostRecentSample, Offset = 0 or -1 (I haven't thought through which is the more appropriate choice). In general, why *not* offer the ability to read back data from a task's output buffer?
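Either approach would make the gentle ramp-down from the motivating example straightforward. A sketch of the generator side (pure Python, no DAQmx calls; the function name and parameters are mine), where `last_value` is exactly the number the proposed property or buffer read-back would supply:

```python
def rampdown(last_value, sample_rate_hz, ramp_seconds):
    # Generate a linear ramp from wherever the output left off down to
    # 0.0 V; the result would be written to the AO task as its final
    # buffer before stopping the generation.
    n = max(2, int(sample_rate_hz * ramp_seconds))
    step = last_value / (n - 1)
    return [last_value - step * i for i in range(n)]
```

Without the read-back, `last_value` is unknowable whenever the waveform shape isn't defined by the data acq code itself, which is the whole problem.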