Data Acquisition Idea Exchange


Hi,

 

It was suggested that I post the issue of nidaqmx not supporting the upcoming Python 3.9 in this group. Here is my original post: https://forums.ni.com/t5/Multifunction-DAQ/Deprecation-warning-for-nidaqmx-using-Python/m-p/4051990/highlight/true#M99324

 

In short, this is the warning currently seen:

DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working

 

... it doesn't seem like an issue that would require a large effort to solve, so I hope this will be prioritized accordingly.
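For reference, the fix is typically a one-line import change, for example (Sequence here is just an illustrative ABC; I haven't checked exactly which imports the nidaqmx sources use):

# Old style -- triggers the DeprecationWarning and stops working in Python 3.9:
from collections import Sequence

# New style -- valid since Python 3.3:
from collections.abc import Sequence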

 

BR Jesper

 

 

I've been in many threads and seen many, many more where the root issue stems from confusion about the way DAQmx Timing and DAQmx Read interpret the meaning of "# samples" very differently for Finite Sampling vs. Continuous Sampling mode.   (For example, here's just one of the times I tried to address that confusion.)

 

First, here's what causes the confusion:

 

  • The 'samples per channel' input to DAQmx Timing is *crucial* for Finite Sampling tasks and usually *ignored* for Continuous Sampling tasks.
  • The 'number of samples per channel' input to DAQmx Read has a default value of -1 when left unwired. However, the *meaning* of this default value is *VERY* different, resulting in very different behavior depending on whether the task is configured for Finite or Continuous sampling. (See the first link I referenced, and the sketch just below this list.)
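To make the difference concrete, here's a rough sketch using the nidaqmx Python API, where passing -1 (READ_ALL_AVAILABLE) for the read count behaves the same way the unwired -1 default does in LabVIEW (device name, rate, and sample counts are just placeholders):

import nidaqmx
from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

# Finite sampling: 'samps_per_chan' defines how many samples the task acquires.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")          # placeholder device
    task.timing.cfg_samp_clk_timing(rate=1000.0,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=1000)
    task.start()
    # -1 (READ_ALL_AVAILABLE) waits for ALL 1000 samples before returning.
    data = task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)

# Continuous sampling: 'samps_per_chan' is only a buffer-size suggestion.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    task.timing.cfg_samp_clk_timing(rate=1000.0,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=1000)
    task.start()
    # The same -1 now means "whatever happens to be in the buffer right now",
    # possibly zero samples -- very different behavior from the finite case.
    data = task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)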

While the relevant info is findable in the help, it also often clearly remains unfound.  I got to wondering whether some changes in the DAQmx API could help.

 

I'll describe one approach, but would definitely be open to better solutions.  The goal is simply to find *some* way to reduce the likelihood of confusion for rookie DAQmx users.

 

I picture adding more polymorphic instances to both the DAQmx Timing and DAQmx Read VIs, so there can be distinct instances for Finite vs Continuous sampling.

 

Further, I picture that the task refnum would carry sufficient type info related to this timing config, such that downstream DAQmx functions can "know" what kind of Timing was set up -- Finite, Continuous, on-demand (the default if DAQmx Timing was never called at all), etc.

 

Then when that task refnum is wired into DAQmx Read, the most appropriate instance of DAQmx Read would show up.  And the corresponding input parameter names, help, default values, and default value *behavior* can all be tailored to that particular instance of DAQmx Read.  For example, perhaps the "# samples" input should become a *required* input for Continuous Sampling tasks, to force a decision and encourage further inspection of the advanced help.

 

Don't know how feasible something like this is, but it's definitely something that regularly trips up newcomers to DAQmx.

 

 

-Kevin P

According to the NI 845x user manual (http://www.ni.com/pdf/manuals/371746e.pdf), most of the timeouts have a minimum setting of 1000 ms (1 s). We hope these timeouts can be set even lower, to less than 1 s.

Or perhaps there is a workaround to further reduce the timeout on the 8451?

Howdy!

 

I am trying to use a data acquisition system with Python. There are thermocouple modules and voltage modules that I would like to read from. The system is currently set up and run in LabVIEW 2013, and I am trying to port the test system to Python for easier test changes and user control.

 

I am wondering whether the NI-DAQmx Python library is kept up to date and whether this is possible. I have been doing a lot of nitty-gritty reading through the documentation for this library because there are not many examples of data collection using Python to talk to NI sensors. After some trial-and-error attempts I have gone back to basics to see if I can even change settings in the configuration of thermocouple channels. All I am trying to do is take measurements from the thermocouples while changing the units from Fahrenheit to Celsius in separate runs. I can't even get this to work, even after looking at an example on Stack Overflow and the documentation page that specifically describes how to configure a thermocouple channel (https://nidaqmx-python.readthedocs.io/en/latest/ai_channel.html and Ctrl-F to find thermocouples).

 

Here is a snippet of the code I'm writing:

import nidaqmx as ni
from nidaqmx.constants import AcquisitionType, ADCTimingMode, ThermocoupleType

Hz = 10  # sample-clock rate in Hz -- placeholder here; set elsewhere in the full script

try:
    with ni.Task() as task:

        # Add the thermocouple channels to read from the NI-9214's
        task.ai_channels.add_ai_thrmcpl_chan("cDAQ1Mod1/ai0:11", name_to_assign_to_channel='',
                                             min_val=0.0, max_val=100.0, units=ni.constants.TemperatureUnits.DEG_F,
                                             thermocouple_type=ni.constants.ThermocoupleType.T)
        task.ai_channels.add_ai_thrmcpl_chan("cDAQ1Mod2/ai0:7", name_to_assign_to_channel='',
                                             min_val=0.0, max_val=100.0, units=ni.constants.TemperatureUnits.DEG_F,
                                             thermocouple_type=ni.constants.ThermocoupleType.T)

        # Set the thermocouple type to T
        #task.ai_thrmcpl_type = ThermocoupleType.T

        # Add the voltage channels to read from the NI-9209
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai0:7")
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai9:12")
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai19:27")
        task.ai_channels.add_ai_voltage_chan("cDAQ1Mod3/ai29:31")

        # Set the rate of the Sample Clock and samples to acquire
        task.timing.cfg_samp_clk_timing(rate=Hz, sample_mode=AcquisitionType.CONTINUOUS)

        # Set the ADC Timing Mode to speed up the collection
        #task.ai_channels.ai_adc_timing_mode = ADCTimingMode.HIGH_SPEED
        task.ai_channels.ai_adc_timing_mode = ADCTimingMode.AUTOMATIC

        # ... (reads and the rest of the snippet omitted here) ...

except ni.DaqError as err:
    print(err)

 

It is a little frustrating to work through these problems because there is not much Python-specific help out there. Also, when you use the documentation's search, the result links you can click on are broken, which is another pitfall of this approach.

 

Any help is greatly appreciated, but I am also curious whether NI will keep the Python library updated and supported going forward.

 

NI-DAQmx python documentation: https://nidaqmx-python.readthedocs.io/en/latest/index.html 

Stackoverflow example help: https://stackoverflow.com/questions/47479913/python-nidaqmx-to-read-k-thermocouple-value

Thermocouple example: https://www.youtube.com/watch?v=NMMRbPvkzFs

As documented in a previous post, it is currently impossible to install the nidaqmx-python library into a Docker container. Enabling this functionality would create an opportunity for software teams to build cutting edge big data pipelines for measurement instruments using container technology. This could also optimize development time by including custom DAQ code in continuous integration pipelines.

Not sure if this should be in another category, but here is the idea.

 

In DAQmx you can register for events like the Change Detection Event, Sample Complete Event, etc. These events are then handled in an Event Structure.

 

Idea: Extend this capability to NI-Sync.

For example, allow a future time event to fire in an Event Structure instead of firing a TTL pulse. The steps would be as follows (roughly illustrated in code right after the list):

1. Create a new NI-Sync instrument driver session.
2. Read the current time of the clock.
3. Define when the event will be generated by adding a delay to the current 1588 time.
4. Program the event to occur at the specified time.
5. Register for the event.
6. The event occurs -- do something.
7. Clean up.
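To show the programming model I'm picturing, here is a plain-Python sketch in which a software timer stands in for the hardware-generated 1588 future time event (purely illustrative -- no NI-Sync API is used at all):

import threading
import time

def on_future_time_event():
    # Step 6: the registered event fires at the scheduled time -- do something.
    print("future time event occurred at", time.time())

now = time.time()                 # Step 2: read the current time of the clock
event_time = now + 5.0            # Step 3: schedule the event 5 seconds from now

# Steps 4-5: "program" the event and register a callback for it.  A
# threading.Timer is only a software stand-in for the hardware event.
timer = threading.Timer(event_time - time.time(), on_future_time_event)
timer.start()

# ... the application keeps doing other work here ...

timer.join()                      # Step 7: clean up after the event has fired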

See below for diagram.

mcduff

FutTimeEvent2.jpg

Counter tasks can only contain one channel, due to the nature of timed signals, obviously. When setting up a system with 16 DUTs with counter outputs, this requires 16 tasks, every single one of which has to be painstakingly created and configured. (As an aside: defining a tab order still seems to be a mystery to NI's programmers, even though LabVIEW supports it.)

Wouldn't it be nice to have a Ctrl+C, Ctrl+V copy-and-paste for tasks, after which you only have to modify the physical channel? IMHO: yes.

 

KR

nimic

AutoStart Property.PNG

Currently, AutoStart is a feature of XNET that is not intuitively obvious. One of my co-workers was trying to test the idea of having several queued sessions start with slight delays between them (using the Start Time Offset CAN frame property). The code wasn't working properly because preloading the array of frames to transmit using XNET Write caused some frames to transmit early. Note that an NI Applications Engineer on a support request was not able to figure out that AutoStart was the issue before my co-worker noticed the AutoStart property a few hours later.

Note that no documentation on XNET Write mentions the AutoStart feature. The XNET Start VI context help doesn't mention it either, but the full documentation does indirectly mention that starting is optional and points you to the AutoStart property's help for additional details.

I would recommend an optional input to XNET Write (defaulting to true) to activate AutoStart. This would match the DAQmx API's Write VI. It would also be a good idea to note somewhere in the XNET Read documentation that input sessions always start automatically.

When a DI change detection task runs, the first sample shows the DI state *after* the first detected change.  There's not a clear way to know what the DI state was just *before* the first detected change, i.e. its *initial* state.

 

This idea has some overlap with one found here, but this one isn't restricted to usage via DAQmx Events and an Event Structure.  Forum discussions that prompted this suggestion can be seen here and here.

 

The proposal is to provide an addition to the API such that an app programmer can determine both initial state just before the first detected change and final state resulting from each detected change.  The present API provides only the latter.

 

Full state knowledge before and after each change can be used to identify the changed lines.  (Similarly, initial state and change knowledge could be used to identify post-change states.)

 

My preferred approach in the linked discussions is to expose the initial state through a queryable property node.  The original poster preferred to have a distinct task type in which initial state would be the first returned sample.  A couple good workarounds were proposed in those threads by a contributor from NI, but I continue to think direct API support would be appropriate.
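For reference, the kind of workaround I mean looks roughly like this in the nidaqmx Python API (my own sketch, not necessarily the exact approach from those threads; device and line names are placeholders, and there's an unavoidable window between the snapshot read and the task start where a change could slip through):

import nidaqmx
from nidaqmx.constants import AcquisitionType

LINES = "Dev1/port0/line0:7"   # placeholder lines

# Grab the "initial" state with a quick on-demand read before the
# change-detection task starts.
with nidaqmx.Task() as snapshot:
    snapshot.di_channels.add_di_chan(LINES)
    initial_state = snapshot.read()          # port value as an integer bitmask

with nidaqmx.Task() as task:
    task.di_channels.add_di_chan(LINES)
    task.timing.cfg_change_detection_timing(rising_edge_chan=LINES,
                                            falling_edge_chan=LINES,
                                            sample_mode=AcquisitionType.CONTINUOUS)
    task.start()
    first_change = task.read()               # state *after* the first detected change
    changed_lines = initial_state ^ first_change   # XOR identifies the changed bits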

 

 

-Kevin P

When creating a VI and wiring the terminals, there should be an option that automatically generates a help file. Since the VI knows the names and types of its inputs, it should be able to generate at least a rudimentary template for the VI help. Then let the user fill in the details plus the required/optional connections. A few simple steps and now we have a functioning help screen for each VI.

I'd like to know if a DAQmx Task has been triggered.  Perhaps via a task property?

 

triggered.png

 

Thanks,

 

Steve K

It would be a godsend if the PDF documents for all NI products included the model number or part number in the file name.

 

Naming the PDFs by NI's internal document number I'm sure makes sense inside NI. But these documents are for us customers to use, so having a downloaded PDF carry some obscure number that does not relate to the product is irritating.

 

I'm constantly renaming the PDF to include the device model.

 

I know it's common sense...so rare it should be considered a Super Power.

The niRFSA Fetch IQ VI provides me access to the absolute time at which the first sample of the IQ recording was acquired, as well as the IQ samples.

I have two requirements for the data types involved:

  1. I need to access the absolute timestamp t0 using LabVIEW's 128-bit fixed-point timestamp format, because a 64-bit floating-point format simply does not have enough mantissa bits to uniquely identify each sample-clock edge across the potential lifetime of the application, and because using floating-point numbers for timestamps is generally a bad idea (their absolute resolution continuously decreases as time increases, causing problems that are difficult to test for late in a product's life).
  2. I also need the IQ samples in unscaled I16 format, which I assume is the most compact, native format that my NI PXIe-5622 down-converter records internally. (I want to do the scaling myself much later, in offline processing.)

Unfortunately, at present, I can only have one or the other (high-res 128-bit timestamps or native, unscaled 16-bit integer samples), but not both simultaneously. This is because the polymorphic niRFSA Fetch IQ VI offers me either unscaled I16 IQ data along with a wfm info cluster that contains the absolute timestamp in the inappropriate DBL format, or it offers me complex WDT records with nice 128-bit timestamps, but then the IQ data comes in inappropriate scaled complex single or double format, which are not the compact native unscaled integer data format I would prefer, and which is tedious to scale back into I16 (leading to an unnecessary I16->float->I16 conversion roundtrip).

 

Feature request: Could you please provide in the next RFSA release a variant of the unscaled I16 niRFSA Fetch IQ VIs that outputs the absolute timestamp in a fixed-point type (either scaled, in LabVIEW's 128-bit timestamp type, or unscaled, as a simple sample-clock integer count)?

 

Application: I'm acquiring IQ data in multiple frequency bands, and I need to know exactly (with <1 sample accuracy) the relative timing between these acquisitions. As my NI PXIe-5667 acquires IQ values with 75 megasamples per second, the required timestamp resolution is at least 13.3 nanoseconds, or 0.0000000133 seconds. But as explained here, absolute timestamps in DBL have only 5 decimal digits resolution left. Therefore I can only determine the relative timing between multiple recordings with an accuracy of slightly better than a millisecond.
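To put rough numbers on the resolution argument (a back-of-the-envelope check only; the exact figures depend on the epoch and scaling the driver uses for its DBL timestamps):

import math

SAMPLE_PERIOD = 1.0 / 75e6            # 75 MS/s  =>  ~13.3 ns between samples

# Seconds elapsed since the LabVIEW epoch (1 Jan 1904), as of roughly 2020.
seconds_since_1904 = 116 * 365.25 * 24 * 3600

# Spacing between adjacent representable DBL values at that magnitude.
dbl_resolution = math.ulp(seconds_since_1904)

print(f"sample period        : {SAMPLE_PERIOD:.3e} s")
print(f"DBL timestamp spacing: {dbl_resolution:.3e} s")
print(f"spacing / period     : {dbl_resolution / SAMPLE_PERIOD:.0f} samples")

Even this rough estimate shows that a DBL absolute timestamp cannot distinguish neighboring sample-clock edges at this rate, which is the heart of requirement 1 above.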

 

Generally, I would recommend that event timestamps should always be provided in APIs in fixed-point timestamp format, to guarantee uniform resolution. They can always easily be converted into floating-point representation later. Floating-point timestamps are a pretty dangerous engineering practice and should be discouraged by APIs.

Idea:

We've come across a few use cases where it would be nice to pull samples from the DAQ buffer based on position in the buffer instead of sample number. This gets a little hard to describe, but an NI applications engineer referred to it as absolute addressing without regard to FIFO order.

 

In its simplest form we could use a read operation that just pulls from the beginning of the buffer as it is (probably?) in memory, maybe using a RelativeTo option of "Start of Buffer", with no offset.

 

The thought is that sometimes a properly set up buffer can contain exactly the data we need, so it'd be nice and clean to just get a snapshot of the buffer.

 

Use cases:

Our use cases involve continuously and cyclically sending AO and sampling AI in tasks that share a time base, ensuring that every n samples will be the beginning of a new cycle. A buffer of that same size n will therefore be circularly updated like a waveform chart in sweep mode.

 

In other words, the sample at the first position in the buffer will always be the beginning of the cycle, no matter how many times the sampling has updated the buffer.

 

If we could take a snapshot of the buffer at any moment, we'd have the latest readings made at every point in the cycle.

 

Alternatives:

The idea is that the buffer at all times has exactly the samples we need in the form we need them. What's lacking in existing functionality?

 

With RelativeTo First Sample, we don't know exactly what samples are there at any moment. We can query Total Samples and do math to figure out what samples the buffer contained, but while we're doing math sampling continues, leading to the chance that our calculation will be stale by the time we finish and we'll get a read error.

 

RelativeTo Most Recent Sample can return an entire cycle worth of samples, but they'll probably be out of phase. The sample beginning the cycle is likely to be somewhere in the middle of the FIFO order.

 

RelativeTo Read Position requires that we constantly read the buffer, which is a hassle if we only want to observe a cycle occasionally. It kind of means we'd be duplicating the buffer, too.

 

Best alternative:

In talking with engineers and on the forums, it sounds like the best option for us is to use RelativeTo First Sample and Total Samples to calculate the sample number at the beginning of a cycle, and then make sure the buffer is many cycles long to mostly guarantee that the sample will still be there by the time we request it.
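Expressed with the nidaqmx Python bindings just to make it concrete, that best alternative looks roughly like the following sketch (channel name, rate, and cycle length are placeholders, and it assumes at least one full cycle has already been acquired):

import nidaqmx
from nidaqmx.constants import AcquisitionType, ReadRelativeTo

CYCLE_LEN = 1000             # samples per cycle (placeholder)
CYCLES_IN_BUFFER = 50        # oversize the buffer so the data is still there

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")    # placeholder channel
    task.timing.cfg_samp_clk_timing(rate=10000.0,
                                    sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=CYCLE_LEN * CYCLES_IN_BUFFER)
    task.start()

    # ... later, when we want a snapshot of the most recent complete cycle ...
    total = task.in_stream.total_samp_per_chan_acquired
    cycle_start = (total // CYCLE_LEN - 1) * CYCLE_LEN   # start of last full cycle

    task.in_stream.relative_to = ReadRelativeTo.FIRST_SAMPLE
    task.in_stream.offset = cycle_start
    snapshot = task.read(number_of_samples_per_channel=CYCLE_LEN)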

 

Forum post: http://forums.ni.com/t5/forums/v3_1/forumtopicpage/board-id/250/page/1/thread-id/91133

NI support Reference# 2407745

Hi,

 

It would be great if VirtualBench had an option to choose whether the I2C address is decoded as 7 bits or 8 bits. I understand that the I2C protocol normally uses a 7-bit address with the eighth bit indicating read/write, but sometimes we need the decoder to display the full 8-bit address, with the eighth bit still indicating read/write, to suit our work.
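For clarity, the relationship between the two conventions is just a one-bit shift (a generic snippet, nothing VirtualBench-specific):

# 7-bit convention: the address occupies bits 6..0; R/W is treated separately.
addr_7bit = 0x48                             # example 7-bit address

# 8-bit convention: the address is shifted left and R/W occupies bit 0.
WRITE, READ = 0, 1
addr_8bit_write = (addr_7bit << 1) | WRITE   # 0x90
addr_8bit_read  = (addr_7bit << 1) | READ    # 0x91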

 

Thanks,

 

Brian

In order to TDD code, one has to be able to isolate dependencies.  A very handy trick for isolating external dependencies is to create an interface with the same method and property signatures as the external class, and then create an empty class which inherits from the external dependency and implements the interface.

 

For example, I'm trying to isolate the HardwareResourceBase class, because I'm attempting to test some code that uses the result of SystemConfiguration.FindHardware("").

 

However, I cannot isolate the HardwareResourceBase class by doing:

public interface IHardwareResourceBase
{
  String UserAlias{get;}
  String MacAddress{get;}
  /* Other methods and properties of HardwareResourceBase */
}

public class HardwareResourceImpl : HardwareResourceBase, IHardwareResourceBase
{}

because the constructor to HardwareResourceBase is hidden.  In order to isolate HardwareResourceBase, I now have to create a full wrapper class, which is a major pain.

 

Please do not fully seal classes by hiding the constructor.  If a method should not be overridden, use the "sealed" keyword.

From what I can tell, there is one master DaqException that is used for everything that goes wrong.  This makes checking for certain exceptions extremely difficult, as you are now limited to checking the error message for certain strings.  This is fragile.

 

Please provide typed exceptions in the .NET DAQmx library.  For example, if a task times out, it should throw a "TaskTimeoutException", which might be a child of DaqException.  Or if the timing setup is invalid, the library would throw an "InvalidTimingConfigurationException".

 

Yes, I realize it's a bit of busy work to create a bunch of typed exceptions, but it makes error handling for consumers much easier.
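To illustrate what this buys the consumer, here's a rough sketch (written in Python purely for brevity; the exception names are the made-up ones from above, and run_measurement is just a stand-in for application code driving a task):

# Hypothetical typed-exception hierarchy (names taken from the suggestion above).
class DaqException(Exception):
    """Base class, as today."""

class TaskTimeoutException(DaqException):
    """Raised when a read or wait operation exceeds its timeout."""

class InvalidTimingConfigurationException(DaqException):
    """Raised when the requested timing setup is not supported."""

def run_measurement():
    # Stand-in for application code driving a DAQmx task.
    raise TaskTimeoutException("read did not complete within 10 s")

# Consumers can now branch on exception type instead of parsing message strings:
try:
    run_measurement()
except TaskTimeoutException as exc:
    print("timed out, retrying:", exc)
except InvalidTimingConfigurationException as exc:
    print("fix the timing configuration:", exc)
except DaqException as exc:
    print("some other DAQmx error:", exc)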

It is a frequent requirement to make measurements on production lines. Position on these lines is often tracked with rotary encoders (https://en.wikipedia.org/wiki/Rotary_encoder). Many NI devices can accept the quadrature pulse train from such a device and correctly produce a current position count. The information in the two-phase pulse train allows the counter to correctly track forward and reverse motion.

 

What would be very useful is a callback in NI-DAQmx that is called after every n pulses, ideally with a flag to indicate whether the counter is higher or lower than the previous value, i.e. the direction.

 

This has recently been discussed on the Multifunction DAQ board here: http://forums.ni.com/t5/Multifunction-DAQ/quadrature-encoder-based-triggering/td-p/3125468 . So I am not alone in requesting something more programmer-friendly than the workaround offered there.
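For context, the closest thing I can put together today with the Python bindings looks roughly like this (device, counter, and terminal names are placeholders, and the direction has to be inferred by the application); the request is for the driver to provide this as a first-class every-n-pulses callback with a direction flag instead:

import nidaqmx
from nidaqmx.constants import AcquisitionType

N = 100                          # fire the callback every N encoder pulses
last_position = {"value": None}

task = nidaqmx.Task()
# Count quadrature pulses on a counter input (device/counter are placeholders).
task.ci_channels.add_ci_ang_encoder_chan("Dev1/ctr0", pulses_per_rev=1024)
# Sample the position on every encoder pulse by using the encoder's A line as
# the sample clock; the terminal name is a placeholder for the actual routing.
task.timing.cfg_samp_clk_timing(rate=100000.0, source="/Dev1/PFI8",
                                sample_mode=AcquisitionType.CONTINUOUS)

def every_n_pulses(task_handle, event_type, num_samples, callback_data):
    data = task.read(number_of_samples_per_channel=num_samples)
    position = data[-1]
    if last_position["value"] is not None:
        direction = "forward" if position > last_position["value"] else "reverse"
        print(f"position={position:.3f} ({direction})")
    last_position["value"] = position
    return 0                     # required return value for DAQmx callbacks

task.register_every_n_samples_acquired_into_buffer_event(N, every_n_pulses)
task.start()
# ... run until done, then task.close()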

 

 

Currently, with a multislot chassis, the system will operate at the requested sampling rate even if that rate is above the maximum supported by the module. In this case, the chassis replicates the additional required data points from the previous sample and does not return an error in NI MAX. With a single-slot chassis, this is not an option. It would be helpful if this feature were also supported on single-slot chassis, so that data could be replicated at a higher sample rate without returning an error message.

 

Relevant KnowledgeBase article: Why is My Slow Sampled C Series Module Able to Operate at a Higher Sampling Rate than the Specified Maximum Rate?

 

 

 

 

Hello.  I'm working on an app to interface with a couple of ANT devices (Garmin Vector, Garmin heartrate monitor).  I've seen a couple of posts on this topic but nobody has posted code.  I talked to Frank Lezu at an NI day in DC a month or so back and he recommended I post about it here.

 

Anyone else looking for ANT/ANT+ support?  I'd be happy to share my code when it's not in a ridiculously embarrassing state but for now see this post for a braindump of my progress.

 

Thanks,

-Jamie