LabVIEW


Error: -200277 Invalid combination of position and offset in Daq mx

Solved!

Greetings everyone, I am using an NI PXIe-6361 DAQ in LabVIEW 2013 (32-bit) to acquire samples from (currently) six sensors. The signal is somewhat noisy, but behaves correctly after software filtering by averaging, so I dug around a bit and found the code shown by NI in this video:

 

https://www.youtube.com/watch?v=fkIYp1mqp_g

 

So far it has worked great: the loop takes essentially no time to run and gives me a nice, clean signal. However, when the VI first starts, I get the error code mentioned in the title, saying that I am referencing a non-existent sample, since it is prior to the first sample taken (sample 0). It has absolutely zero effect on the actual VI: if I hit Continue, it runs nicely. But since this will be end-user oriented, I don't think users will like seeing an error at every startup. Has anyone encountered this problem? If so, how did you solve it? Any ideas to at least suppress the error message? So far I have tried:

 

-Inserting a read sample before the actual while loop, no luck

-Inserting a wait for task complete, gives me a timeout error

-Making it wait a bit at first run, no luck

 

EDIT: please ignore the for loop for now; it was used just to check whether the data was formatted correctly, and will later be used for software filtering, but it has no effect on the error.

 

Snipper.png

Message 1 of 8
  1. Why do you need the Offset/Relative-to property change? Just delete it.
  2. It does not make any sense to use TWO DAQmx Read functions! Use a single DAQmx Read function.
  3. Delete the FOR loop, and do NOT use any Wait function in the While loop; DAQmx should handle the timing.

edit: I do not understand the Stop function, and you have broken wires. I could not examine your snippet further, since there are missing subVIs. You should really start from a DAQmx example VI; there are lots of them in the Help menu!

 

edit2: After fixing the errors you see by checking proper DAQmx examples, you should also consider setting up a Producer/Consumer design. You mention you need to do software filtering on the incoming data, and this should NOT be performed in the DAQ loop! If you use a proper Producer/Consumer DAQmx template, you acquire data in the Producer loop and broadcast it via a Queue to the Consumer loop. Everything else (filtering, file saving, etc.) happens in the Consumer loop.
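Since LabVIEW is graphical, here is a minimal text-language sketch of the Producer/Consumer idea in Python (stdlib `threading` + `queue`), purely to illustrate the pattern; the chunk contents and sizes are made up, and in a real VI the producer body would be the DAQmx Read call:

```python
import queue
import threading

def producer(out_q, n_chunks, chunk_size=2000):
    """Simulated DAQ loop: push fixed-size chunks into the queue as fast as
    they 'arrive'; no filtering or file I/O happens here."""
    for i in range(n_chunks):
        chunk = [float(i)] * chunk_size  # stand-in for one DAQmx Read result
        out_q.put(chunk)
    out_q.put(None)  # sentinel: acquisition finished

def consumer(in_q, results):
    """Consumer loop: software filtering (here, a block average) runs
    outside the acquisition loop, so the DAQ loop never falls behind."""
    while True:
        chunk = in_q.get()
        if chunk is None:
            break
        results.append(sum(chunk) / len(chunk))

q = queue.Queue()
averages = []
t = threading.Thread(target=consumer, args=(q, averages))
t.start()
producer(q, n_chunks=3)
t.join()
print(averages)  # → [0.0, 1.0, 2.0]
```

The point of the split is exactly what the post says: the queue decouples the loops, so a slow filter or file write in the consumer never delays the next DAQmx Read in the producer.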

Message 2 of 8

 

I can't reference the YouTube video now, but am supposing that you really *do* want to use the "RelativeTo" and "Offset" properties to repeatedly read the most recent chunk of data.  Chunks may be discontinuous or have overlap, whatever, you just want to work with the freshest data.  Right so far?

 

I've done that kind of thing quite a bit.  I would often just brute-force it the following way:

- before entering the loop, set "RelativeTo" = "Most Recent Sample" and "Offset" = 0.  Then do a Read of the standard # samples, for you that's 2000.  Be sure to wire this # into the DAQmx Read call!  

- After that returns with 2000 samples (that I generally ignore), I set "Offset" = -2000 and enter the loop.  Now you know 2000 samples have made it into your buffer and you can start looking 2000 samples backwards into the past from here on out.

- inside the loop, simply keep calling DAQmx Read, again be sure to wire in # samples = 2000
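To make the bookkeeping behind those steps concrete, here is a toy Python model of the acquisition buffer (a hypothetical `RingBuffer` class, not the NI API): the pre-loop read guarantees 2000 samples exist, after which reading at Offset = -2000 relative to the most recent sample is always legal:

```python
class RingBuffer:
    """Toy model of the DAQmx acquisition buffer (hypothetical, not NI's API)."""
    def __init__(self):
        self.data = []  # every sample acquired since the task started

    def acquire(self, samples):
        self.data.extend(samples)

    def read_most_recent(self, n, offset):
        """Mimic RelativeTo = 'Most Recent Sample' with a negative Offset."""
        start = len(self.data) + offset
        if start < 0:
            # the situation DAQmx reports as error -200277
            raise RuntimeError("requested sample is before sample 0")
        return self.data[start:start + n]

buf = RingBuffer()
buf.acquire(range(2000))  # pre-loop read: wait until 2000 samples exist
# from here on, every loop iteration can safely look 2000 samples back
latest = buf.read_most_recent(2000, offset=-2000)
print(latest[0], latest[-1])  # → 0 1999
```

Calling `read_most_recent(2000, -2000)` on an empty buffer raises immediately, which is the text-model analogue of the startup error in the thread.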

 

One last note on your code snippet.  Your two calls to DAQmx Read may or may not return exactly the same set of samples as each other from one iteration to the next.  It's important for you to understand why.  Do you?   (I'll explain if you want, but want to give you a chance to figure it out on your own first.  That kind of knowledge tends to stick better.)

 

 

-Kevin P

Message 3 of 8

Sup guys, solved it already, and here's how, though I would appreciate some insight on the why. The thing is, I really do need the offset function to grab the most recent chunk of data, as described in the video I attached. In a producer/consumer architecture as I understand it, I need to wait a full second (2k samples at 2 kHz) to have a chunk of data, then I compress said data to a single number (for each sensor) and work with that (I might be wrong). Since I also intend to save the filtered data at 2k samples per second, that means I would need 2k seconds to have a single 2k chunk of averaged data, which is not what I want.

The 1-second delay was used so I could visually confirm that I was indeed getting the data I wanted, and seemingly I did, but I left it there just so I could easily check the dataflow. I asked you to please ignore everything inside the for loop; that might not have been clear, so I just deleted the distractor from the updated code. Also, I didn't realize the startup subVI was not attached, I apologize; here's the subVI so you can run it. It's basically the startup code the DAQ Assistant creates when asked for code.
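The "compress each chunk to a single number per sensor" step is just a block average. A quick Python sketch (the [sensors][samples] layout and sensor count are assumptions for illustration; the real data comes out of DAQmx Read as a 2D array):

```python
def compress_chunk(chunk):
    """Reduce one chunk to a single filtered value per sensor (the mean)."""
    return [sum(sensor) / len(sensor) for sensor in chunk]

# assumed layout: [sensors][samples], e.g. 6 sensors x 2000 samples each
chunk = [[float(s)] * 2000 for s in range(6)]
print(compress_chunk(chunk))  # → [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

Averaging each 2000-sample chunk acts as a simple low-pass filter, which matches the noise behavior described at the top of the thread.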

 

It turns out that when I run the error wire all the way to the end of the while loop (snippet 1), the code runs without errors right from the first start. I can use either one or two DAQmx Read functions, in series or parallel, and it runs fine.

 

Works.png

 

However, when I delete the error wire (snippet 2), it throws the error again. Does anyone know why this happens? By the way, Kevin, I tried your solution, setting the offset to zero on the first iteration and then to -2000 for any further ones, but it throws the same error.

 

Error.png

Message 4 of 8

A possible explanation for why the error appears to disappear when you connect the Error Line to the output of the DAQmx Read function is that there is an Error present (and it is unclear where; it could be anywhere along the Error Line) and you have the default Enable Automatic Error Handling Dialogs setting (Tools > Options > Block Diagram). With this setting, if a LabVIEW function detects an Error within the VI but no Error Line is wired to its Error Out (wiring it suggests that the user is handling the Error monitoring and notification), LabVIEW "kindly" notifies you of the Error.

 

So it wasn't that the second code (without the Error Line) mysteriously got an Error; it's that the first code also has the Error, but because you said "Don't stop, I'll look at the Error Line and decide what to do," it didn't report it. I'll bet that if you wire the Error Line in the first case to the Stop indicator and put an Error Out terminal outside the While loop, you'll see the same Error as reported in your second example.

 

Bob Schor

Message 5 of 8

Thanks a lot for the hints, Bob. I'm pretty curious about this one, so I tried it right away, and here are my results:

 

-When I run it with a Simple/General Error Handler outside the while loop, it reports no error

-When the error handler is inside the loop (after the read), it reports the error, but only gives the Continue option

-When I run the while loop once, with a True constant wired to the stop condition, it reports the error

-When I run it in highlight execution mode, it does not report the error even once

 

This makes me think the error lies somewhere in the dataflow, but I'm not versed enough in error handling to pinpoint exactly where or why this happens. Any more ideas?

Message 6 of 8
Solution
Accepted by topic author Daikataro

I'll take a stab at your observations:


-When I run it with a simple/general error handler outside the while loop, it reports no error

-When the error handler is inside the loop (after the read) it sends the error, but only gives the continue option

-When I run the While loop once, with a true constant to the stop condition, it reports the error

-When I run it on highlight execution mode, it does not report the error even once

 


All probably relate to the key idea behind the error: you can't retrieve samples that are 2000 sample periods in the past until the task has run for at least 2000 sample periods to capture them.

  

1. The error tunnel through the loop only retains the last error value.  The loop probably runs multiple times where the error is generated but never seen.  Once enough time has passed to get 2000 samples into the data acquisition buffer, subsequent iterations no longer produce an error.  If you stop the loop after that time, the most recent error value is indeed "no error".

 

2. Yep, you're seeing the error that happened on the 1st iteration.  The time it takes to respond to the dialog is probably long enough that you *only* see the error on the 1st iteration.

 

3. Yep, as described above.

 

4. Code executes slowly enough that the time from task start until you finally arrive at the first DAQmx Read is longer than 2000 sample periods.  So again, yep.
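All four observations reduce to one arithmetic condition, sketched here in Python (a toy model of the RelativeTo/Offset index math, not the NI API):

```python
def first_sample_index(total_acquired, offset):
    """Buffer index of the first sample for a read taken relative to the
    most recent sample (toy model of DAQmx RelativeTo/Offset)."""
    start = total_acquired + offset
    if start < 0:
        # the situation DAQmx reports as error -200277
        raise ValueError("requested sample precedes sample 0")
    return start

# First loop iteration: task just started, only ~120 samples acquired so far
try:
    first_sample_index(120, -2000)
except ValueError as e:
    print("first iteration:", e)

# After at least 2000 sample periods, the same read is legal
print(first_sample_index(2500, -2000))  # → 500
```

Whether any given iteration errors is therefore purely a race between how fast the loop reaches the Read and how fast samples land in the buffer, which is why dialogs, single-stepping, and highlight execution all change the outcome.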

 

Couple more things:

- Bob's comments are super helpful and important to retain.  The auto error handling behavior is a real subtlety in LabVIEW, not at all intuitively obvious because almost any other output terminal can remain unwired without consequence.

 

- I threw together a quick, messy example of my prior advice to make a pre-loop call with Offset = 0.  The following works fine without error on my end using a desktop X-series board similar to yours.  I ran 4 channels at 100 kHz and then again at 1 kHz without error.  The 1 kHz case took the expected couple of seconds to run to completion, and the first Offset = 0 read did its job.  You can save the pic (keep it in png format), then drag the file onto a blank LabVIEW diagram to get instant code.  Adjust device, channels, and sample rate as needed and tell me what you get.

 

- Be sure to wire in the "# samples" input on the DAQmx Read call!  When I removed that wire in the example below, I saw your error too.  You need to wire in that 2000 number.

  

 

-Kevin P

 

 

read most recent.png

 

 

Message 7 of 8

I think you just hit the nail on the head, Kevin. I added a case structure with a small Wait (ms) function (300 ms was the lowest I could get away with) BEFORE the read node, and it indeed gave the task enough time that the error would not show up at all. Also, in your code I noticed you edit the node properties before the task starts, unlike the NI video, which edits the node in the running loop after the task was initialized, so most likely the video is outdated and doesn't reflect current LabVIEW functionality. So, to summarize:

 

-There is a small delay from when the task starts to when I can actually read with an offset, unlike what the video shows

-Leaving automatic error handling on masks the error the first time it occurs; the error then ceases to exist on further iterations

-Letting the buffer fill, be it via a no-offset read or by waiting, makes the error disappear completely
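For the waiting variant, a worst-case bound on the delay is easy to compute, sketched in Python below. (The thread's own numbers give 1 s, yet 300 ms sufficed in practice, presumably because the Read call itself also waits for its requested samples, so treat this only as an upper bound.)

```python
def min_fill_time_s(samples_needed, sample_rate_hz):
    """Worst-case seconds the task must run before a read at
    Offset = -samples_needed is guaranteed to succeed."""
    return samples_needed / sample_rate_hz

print(min_fill_time_s(2000, 2000))  # → 1.0  (2000 samples at 2 kHz)
```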

 

Your code was rather helpful; however, it would mean adding more code to the existing VI. Since actual datalogging only starts when the user says so, and there is no way a user can start datalogging faster than the buffer fills (they have to fire up the whole interface and react to it first), I'll just let auto error handling dispose of the error for now. If I ever need it to run right off the bat, I'll certainly look into adding the read at first start.

Message 8 of 8