07-26-2012 12:29 PM
I have an issue with using the change detection event for digital I/O. I have 4 lines I need to monitor that change very slowly (basically user inputs to control the software). The code works fine in development mode with no errors. When I build the application and run the EXE, it gets the first change event, and then every read after that returns error -200279. It is very repeatable: it always fails in the EXE and always works in development. I am just switching between the two, so it is all the same hardware. I currently use DAQmx Read (1 Channel, 1 Sample) and return a boolean array. If I change the read to 1 Channel, N Samples, then it seems to work, but it ALWAYS returns just one sample in the array.
07-26-2012 01:02 PM
Here is a screen shot from the exe. Notice how available samples is only 1 but I get an error. I have other DAQ tasks running, could they somehow be interfering?
First change successful. Only 1 available sample and 1 total:
The second change is not successful: it is detected, but available samples is 0 while the total has incremented:
My digital lines are Port0\0:4
They start a linear encoder DAQ task on Counter 0. Could that somehow be interfering? Does the Linear Encoder option allocate other resources that conflict? I guess I still do not understand why it works in development and not when built.....
07-26-2012 06:51 PM
Strange. Dunno about diffs between Executable and Dev environment. But certain things about the front panel seem kinda like what you'd see if the buffer size were set == 1.
I'd be leery about calling DAQmx Read 1Channel 1Sample for a buffered task. Anytime I have a buffered data acq task, I always read "N Samples", even if I set N == 1. I don't *know* that the former would be a problem, but it's a point of suspicion.
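For what it's worth, the same advice applies outside LabVIEW: on each change event, drain everything currently in the buffer rather than asking for exactly one sample. Here's a minimal pure-Python sketch of that pattern (no DAQmx calls; a `deque` just stands in for the change-detection buffer, and `drain_buffer` is a hypothetical helper, not a driver function):

```python
from collections import deque

def drain_buffer(buffer):
    """Read *all* samples currently available, not just one.

    Returns the list of patterns read; the caller typically acts on
    the last one. Draining everything per event avoids the case where
    two changes land in the buffer but only one read is issued,
    leaving a stale sample behind (or an error on the next event).
    """
    samples = []
    while buffer:
        samples.append(buffer.popleft())
    return samples

# Simulated buffer: two changes arrived before the event was serviced.
buf = deque([0b0001, 0b0011])
read = drain_buffer(buf)
latest = read[-1]  # act only on the final pattern
```

The point is simply that the read size tracks "however many samples are available", which is what an "N Samples" read (or a read-all-available mode) gives you and a strict 1-sample read does not.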
You haven't identified your data acq hardware. If it's an M-series (or probably also an X-series) board, I would not expect other DAQ tasks to interfere with the change-detection capabilities of port 0. I know less about cDAQ chassis and modules, but I think things get dicier there regarding the total # of clocks available to be shared and such.
When you say "THEY" start a linear encoder task -- who are THEY? Do they start it both for the Dev and the Exe? Can you have them not start the task to help you diagnose things?
07-27-2012 12:40 AM
It is an X-series board. When I say "they" I mean the I/O lines. Depending on their state, they turn on/off the measurement of a linear encoder position. By default the buffer size is 1000 (the continuous-samples number = 1000; the help says this dictates the buffer size).
It gets to the event case indicating a "change" was detected. Not sure why it shows "0" for available samples. Maybe I will try a new board.
07-27-2012 06:09 AM
Can you post code or a snippet? It sounds like you have a hardware-based change-detection task going on, but you also refer to an event case. Did you register the event structure for a DAQmx change detect event?
Hardware change detection is a cool feature, but there are some gotchas to watch out for. [long story deleted] I know that when bits change at almost but not quite exactly the same instant, it's tough to predict whether the board will buffer it up as 1 change or 2. And if buffered as 1 change, I *think* it's possible but not guaranteed that the pattern in the buffer will be the intermediate state not the final one.
Do you have physical hardware buttons for a user to operate? Could there be bounce/debounce issues to worry about?
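If mechanical buttons are involved, contact bounce can make one press show up as several rapid changes, each firing a change-detection event. One common software-side mitigation is to accept a new pattern only after it has held steady for a settle time. A purely illustrative Python sketch (no DAQ calls; the 20 ms settle time and the `Debouncer` class are my own assumptions, not anything from this thread):

```python
class Debouncer:
    """Accept a new digital pattern only after it has been stable
    for `settle_s` seconds. Timestamps would come from the change
    events in a real application."""

    def __init__(self, initial, settle_s=0.02):
        self.stable = initial          # last accepted pattern
        self.candidate = initial       # pattern currently being timed
        self.candidate_since = 0.0
        self.settle_s = settle_s

    def update(self, pattern, now):
        if pattern != self.candidate:
            # Pattern changed again: restart the settle timer.
            self.candidate = pattern
            self.candidate_since = now
        elif pattern != self.stable and now - self.candidate_since >= self.settle_s:
            # Pattern has held long enough: accept it.
            self.stable = pattern
        return self.stable
```

Each change event calls `update()` with the new pattern and a timestamp; brief glitches that revert before the settle time never reach `stable`.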
P.S. Not clear how the "IO lines" are used to decide whether to turn on/off the encoder measurement. Are you referring to other digital IO lines? Do you have another DI task? Are the DI lines part of the change detection port?
07-31-2012 10:10 AM
Here is a snippet. I made a simple test using parts of a larger program.
The top task outputs a pulse that triggers a camera.
The middle section of code is a counter task that reads an encoder position.
The bottom code is the IO event change.
This simple piece of code works both in development and as an EXE. My larger program still only works in development: I get buffer overflows for the encoder read and for the DIO read in the event case when running the EXE. Since it always works in development mode, I have no idea how to debug this, especially when I pull out the pertinent pieces and they still work in EXE mode. Maybe I will open a service request.
08-01-2012 09:18 AM
Sometimes applications behave differently between the development environment and an executable. I would recommend that you follow the debugging technique for executables given in the KB article Remotely Debugging Executables in LabVIEW, and use execution highlighting or probes to check where the error comes from.
Have a good day!
08-09-2012 09:48 AM
Any progress? I just took another quick look at your screenshot, and only one thing stands out: it looks like you might have a greedy loop in the middle where you service your command queue. Most of the time you'll poll for status, see nothing, and then execute the "idle" state. If you've got a CPU-releasing delay in there, never mind. But if not, you'll be right back to polling again immediately, and so on.
I would call the Dequeue function instead, and pay attention to the timeout and error outputs to help validate the command. Your app might even allow you to use an infinite timeout (value == -1), which makes things even simpler. Then it won't return until you get either a valid command or a queue error (such as a destroyed queue).
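The same idea in text form: block on the queue instead of spinning on it. A minimal sketch using Python's standard-library `queue` module (the `None` sentinel standing in for a destroyed-queue error is my own convention, not something from the original code):

```python
import queue

def command_loop(cmd_queue, handle, timeout_s=None):
    """Service a command queue without a greedy poll loop.

    `timeout_s=None` blocks forever (the LabVIEW timeout == -1 case);
    a finite timeout lets an "idle" state run periodically without
    burning CPU. Returns when the sentinel None arrives, standing in
    for a destroyed queue / error condition.
    """
    while True:
        try:
            cmd = cmd_queue.get(timeout=timeout_s)
        except queue.Empty:
            handle("idle")        # timeout elapsed: run the idle state
            continue
        if cmd is None:           # sentinel: shut down cleanly
            return
        handle(cmd)
```

With an infinite timeout the loop consumes no CPU between commands, which is exactly the benefit of a blocking Dequeue over a tight polling loop.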