While I realize there is already a third-party option for this, it only makes sense for NI to offer the cRIO users out there something that can do what this module does, in a cRIO platform module. That way we can have a North American source for this very important data input device.
Optimally, a single module design with two to four input channels.
We're running into issues on a regular basis where the 8360 card connecting to the laptop comes out, gets moved, etc. Once the connection is lost, a reboot seems to be the only way to re-establish it. This results in too much wasted time.
Not knowing what lies beneath and the complexities involved: is there any way to make hot-swappable hardware for a PXI connection to laptops?
Hello. I'm working on an app to interface with a couple of ANT devices (Garmin Vector, Garmin heart rate monitor). I've seen a couple of posts on this topic, but nobody has posted code. I talked to Frank Lezu at an NI day in DC a month or so back, and he recommended I post about it here.
Is anyone else looking for ANT/ANT+ support? I'd be happy to share my code once it's not in a ridiculously embarrassing state, but for now see this post for a brain dump of my progress.
The NI-RIO driver package, for example, is 4 GB in its most recent version, which is comparable to the size of a common operating system. That is too much, in my opinion, if someone needs only a specific driver for a specific piece of NI hardware. I therefore suggest reducing the granularity of driver packages to more bite-sized morsels (e.g., 200 MB max).
Many CAN protocols require a byte in a cyclic message to be incremented each time the message is sent (this is often byte 0). I may have read somewhere that this is possible with VeriStand, but I am not using it. When using only LabVIEW and the NI-XNET API, the only way to achieve this is to call the XNET Write function and manually set the value of that byte. But having to call the API every time the message should be sent removes all the benefits of cyclic messages. Moreover, LabVIEW cannot guarantee the same level of speed and determinism (if the message is to be sent every 5 ms, for example).
Being able to configure a signal as an auto-incrementing counter would be a huge improvement. To me, this is a must-have, not a nice-to-have...
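For reference, the manual workaround amounts to a soft-timed loop like the sketch below. It is shown with python-can rather than NI-XNET purely to illustrate the pattern; the channel, interface, and arbitration ID are made up, and the sleep-based timing is exactly the part that cannot match a hardware-timed cyclic message.

import time
import can  # python-can, used here only to illustrate the pattern

# Hypothetical bus and message; byte 0 carries the rolling counter.
bus = can.interface.Bus(channel="can0", interface="socketcan")
msg = can.Message(arbitration_id=0x123, data=[0] * 8, is_extended_id=False)

counter = 0
while True:
    msg.data[0] = counter          # manually stamp the counter byte
    bus.send(msg)
    counter = (counter + 1) % 256  # wrap at one byte
    time.sleep(0.005)              # soft 5 ms timing, no determinism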
When a cDAQ chassis (I'm using both a 9184 and a 9188) is energized (or the host computer is rebooted), it is programmatically read as reserved (by either MAX or a LabVIEW program). To gain control of the chassis, one has to either reserve it in MAX (MAX doesn't save or remember the previous reservation) or programmatically force the reservation in the LabVIEW code. In addition, if a chassis is reserved by a different host, another host can programmatically force the reservation for itself. Both of these can be accomplished by using the reserve chassis function with the 'Override Reservation' input set to True. This really is not a good method: it's effectively a hostile takeover of the hardware (I've tried this, and I can literally reserve hardware that is actively being used by another host).
I would recommend firmware/driver/software updates to correct this reservation behavior.
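To make this concrete, here is roughly what the forced takeover looks like today, sketched with the nidaqmx Python API (the device name is hypothetical, and I'm assuming the reservation call is exposed as reserve_network_device, mirroring the LabVIEW/C function):

import nidaqmx.system

# Hypothetical chassis name.
dev = nidaqmx.system.Device("cDAQ9188-1A2B3C")

# override_reservation=True seizes the chassis even if another host
# currently holds the reservation -- the hostile takeover described above.
dev.reserve_network_device(override_reservation=True)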
All of the Jitter Analysis Toolkit (JAT) VIs output results as "sequence" and "timestamp", e.g. the "Max and Min Voltage.vi".
I use the JAT to analyze high-speed differential signals with a unit interval of just 300 ps. Because of this, the timestamp output cannot meet the required resolution. However, if the timestamp were replaced with a double-precision float, it should be able to.
I have brought this up with NI's tech support, and their recommendation was to suggest the change over here.
This is pretty trivial to achieve through LabVIEW itself, but...
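For what it's worth, a quick sanity check of the double-precision suggestion (a Python two-liner; the one-day span of 86400 s is just an example): even across a full day of seconds, the spacing between adjacent doubles stays far below a 300 ps unit interval.

import numpy as np

# ULP (gap between adjacent doubles) at a span of one day of seconds:
print(np.spacing(86400.0))  # ~1.46e-11 s, i.e. ~15 ps, well under 300 ps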
SignalExpress is a simple, stand-alone data acquisition system that allows those with limited exposure to LabVIEW to set up simple test and measurement routines. One area where this is ideal, at least for me, is in environmental or long-life testing. Instead of crafting a beautiful piece of custom software for my colleagues, I can hand them a DAQ, point them in the direction of the SignalExpress and DAQmx installers, and off they go. With a little fiddling, they can create a logger that suits their needs.
One thing I've noticed, however, is that when sampling with non-simultaneous-sampling cards such as the USB-6225, users will select 1-pt-on-demand, set some big interval, and then come back screaming at the top of their lungs: "OHMYGOD THERE'S CROSSTALK BETWEEN CHANNELS!". With a little bit of fault-finding, it's easy to point out that it's not crosstalk but ghosting between channels, because (I would guess) 1-pt-on-demand uses interval sampling and rattles through the multiplexing as quickly as it can.
My idea: give users the option to either select a round-robin mode with a sensible delay, or take complete control over the interchannel delay.
I realise that the standard line is usually "use LabVIEW" (I do), but I'd rather spend my time working on the important stuff and empowering those with less experience and/or exposure to make accurate measurements.
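For DAQmx users, the underlying fix is slowing the AI convert (multiplexer) clock so each channel gets settling time, which is exactly the control SignalExpress could expose. A minimal sketch with the nidaqmx Python API (device name and rates are made up, and I'm assuming the convert clock is exposed as timing.ai_conv_rate):

import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # Hypothetical device: four multiplexed channels on a USB-6225.
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
    task.timing.cfg_samp_clk_timing(rate=10.0,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    # Spread conversions across the sample interval instead of running
    # the multiplexer flat out: more settling time, less ghosting.
    task.timing.ai_conv_rate = 1000.0  # conversions/s across the scan list
    data = task.read(number_of_samples_per_channel=10)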
I use a PXIe-6363, which is a wonderful device, but it lacks level shifting on the digital I/O.
I would recommend that most multifunction DAQ devices support programmable and externally driven level shifting for digital I/O: a DAC-driven level-shift range of 0.8 V to 3.6 V plus 5 V, and support for an external reference input. It would also be nice, where multiple ports are present, for some of them to allow independent logic levels. The default level should be 3.3 V, with port-configurable pull-up, pull-down, and latch-hold.
It is not possible to build the kernel module nirlpk 2.0 on kernels >= 3.10:
if (create_proc_read_entry(nNIRLP_kDriverAlias, 0, nNIRLP_procDir, nNIRLP_procRead, NULL))
create_proc_read_entry appears to have been deprecated in kernel 3.10 (and removed shortly after), so this call breaks the build; the driver needs to be ported to proc_create() with a struct file_operations (or the seq_file helpers).
It would be nice to have the ability to spawn a "child" Task based upon a "parent" Task's local virtual channels. Today, this can be accomplished with global virtual channels, but not easily with local virtual channels within the Task. Today, we dynamically generate Tasks based upon the physical channels and save them to an external file; see the sketch below. There are many variations of this, but all require a decent amount of programming for complete automation. The external calibration interface in MAX has greatly improved over the years, and it is now easy to calibrate multiple sensors at the same time. Not only that, but it is nice to have device setup and calibration information in one location.
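A rough sketch of the kind of programming this workaround takes today, using the nidaqmx Python API (task and channel names are illustrative, and I'm assuming each virtual channel's physical channel can be read back via the physical_channel property as in the LabVIEW API):

import nidaqmx

# "Parent" task with eight local virtual channels (names illustrative).
parent = nidaqmx.Task("ParentTask")
parent.ai_channels.add_ai_voltage_chan("Dev1/ai0:7")

# Spawn a "child" task from a subset of the parent's channels by
# re-deriving each virtual channel's underlying physical channel.
child = nidaqmx.Task("ChildTask")
for name in parent.channel_names[:2]:
    phys = parent.ai_channels[name].physical_channel.name
    child.ai_channels.add_ai_voltage_chan(phys)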
DAQ hardware like the NI PCI-6225 has 8 correlated DIO lines (port0), but the hardware supports 24 DIO lines in total. It is not possible to use hardware timing on the port1 or port2 outputs; they are not buffered. Please extend buffered output to port1 and port2 on M-Series DAQ hardware as well, to get 24 correlated DIO lines.
The IMAQdx Timeout is a non-settable property; it is fixed at 5 s.
When I do not trigger my camera for 5 seconds (and that is what I regularly want after acquisitions at 100 FPS), I get a timeout error.
I just ran into a situation where I need to stream a lot of data to TDMS. The only problem is that I need to store additional metadata with the channels. I could go through all of the generated TDMS files and insert it after the fact, but this is kind of tedious. I propose a way to add metadata to the channel. My first thought was to use a variant input on DAQmx Create Channel, but some of the polymorphic instances already have really full connector panes. So I am now thinking of just adding a property to the Channel Property Node that is a variant. When logging to TDMS, the variant attributes could be written into the channel's metadata. Something similar could be done for the group, so that we can have additional group metadata.
Metadata I'm currently thinking about includes sensor serial number and calibration data. I'm sure there is plenty of other information we would like to store with the TDMS file.
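To illustrate the kind of per-channel and per-group properties this idea asks DAQmx to write automatically, here is a sketch with the npTDMS Python library (file name, property names, and values are all made up):

import numpy as np
from nptdms import TdmsWriter, GroupObject, ChannelObject

# Hypothetical calibration metadata attached directly to the channel.
channel = ChannelObject(
    "Measured Data", "ai0", np.zeros(100),
    properties={"sensor_serial": "SN-1234",
                "cal_gain": 1.0003,
                "cal_offset": -0.002})
group = GroupObject("Measured Data", properties={"rig_id": "Rig 7"})

with TdmsWriter("log_with_metadata.tdms") as writer:
    writer.write_segment([group, channel])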
At the new client... no shock to many of you, I "get around".
I explained to some of my new compadres that DAQmx "Tasks" need to be created once... preferably during development!
I even created a new task in MAX using the DAQmx wizard, dragged it to the LabVIEW project explorer, and all of that!
I even went so far as to name the "AUX" temperature channel "armpit". Trust me, after 5 minutes delivering a .lvproj based on the "Continuous Measurement and Logging (DAQmx)" project template, it was impressive to the client that the plot "armpit" showed 37 °C on the chart. Guess where the thermocouple was.
So, because I am that amazing, I showed them that they could drag and drop the Task to MAX and use MAX to monitor my armpit temperature. I even showed them that MAX could show them the wiring diagram!
"HOLD IT!" they said. "The wiring diagram is right there! On SCREEN! Per channel!"
That is where I just about lost my mind. They wanted to see this connection diagram for another channel, and that worked! BUT there was no way to output that wonderful data!
"Can I create a wiring diagram for this channel, device, or task?" were the next words out of their mouths. I WAS STUNNED! "Not today," I said. "I'll post that excellent idea!"
It gets a bit annoying that PXI1Slot2 is listed after PXI1Slot14 when doing an ASCII sort. I (OK, admittedly, my coworker) propose naming conventions that allow for a proper ASCII sort, for instance PXI1Slot002 and PXI1Slot014.
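Until something like that exists, a natural-sort key on the client side gives the expected order; a minimal Python sketch:

import re

def natural_key(name: str):
    # Split "PXI1Slot14" into text and digit runs so the numeric parts
    # compare as integers rather than character by character.
    return [int(tok) if tok.isdigit() else tok
            for tok in re.split(r"(\d+)", name)]

slots = ["PXI1Slot14", "PXI1Slot2", "PXI1Slot3"]
print(sorted(slots, key=natural_key))
# -> ['PXI1Slot2', 'PXI1Slot3', 'PXI1Slot14']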
I am trying to read data from a SAIA SPS (PLC) via its CGI interface. I have to read the values of 933 variables at a time.
I wrote a LabVIEW program, which you can find in the attachment. My problem is that I can only read 170 of the 933 values I want at a time.
Question: Is there a maximum number of variables that can be read or written with DataSocket at a time?
Thank you for your contribution.
1. On page 2-4 of the manual (http://www.ni.com/pdf/manuals/375865a.pdf), a sketch or picture would be helpful for understanding the text. Page 2-4 would be easier to understand with a small picture showing, for example, how to connect AI SENSE.
2. A terminal diagram in the manual for each card (PXI, PCI) would also be very helpful. Alternatively, a sheet with the terminal diagram could be shipped with the devices.