I am a mentor on a 2nd year FTC team, and am trying to help our student programmers resolve a puzzling issue in advance of super-regionals. This is for the 2014-15 FTC contest and we are trying to "find" the IR sensor in autonomous and then respond accordingly based on its location.
Our student programmers are working to refine this autonomous program and have experienced an odd situation. We don't know if there is a programming problem or a hardware issue; it seems to be programming.
We are trying to detect the center beacon in order to identify the location of the 120 cm goal and the kickstand. Note that all of the "drive paths" have already been figured out and operate quite consistently. The problem seems to be in linking the sensor and logic together.
The basic logic is as follows:
1. Drive to Position #1
2. Attempt to Detect IR beacon at Position #1
a. If TRUE, proceed to defined instructions
b. If FALSE, drive to Position #2
3. Attempt to Detect IR beacon at Position #2
a. If TRUE, proceed to defined instructions
b. If FALSE, proceed to defined instructions (which assume that tower is located in Position #3).
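The branching above can be sketched in Python, purely as an illustration of the intended control flow (the actual program is LabVIEW; `drive_to`, `detect_ir`, and `handle` are hypothetical stand-ins for the team's drive, sensor, and response code):

```python
# Hypothetical sketch of the autonomous branching described above.
# drive_to(n) drives to Position #n, detect_ir() returns True if the
# beacon is seen, handle(n) runs the defined instructions for Position #n.

def run_autonomous(detect_ir, drive_to, handle):
    drive_to(1)
    if detect_ir():          # fresh reading taken at Position #1
        handle(1)
    else:
        drive_to(2)
        if detect_ir():      # fresh reading taken at Position #2
            handle(2)
        else:
            handle(3)        # assume the tower is at Position #3
```

The key point of the sketch is that `detect_ir()` is called again, freshly, at each decision point rather than reusing an earlier value.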
I must note that while I know programming in general, I do not know LabView. But the team programmers have become reasonably proficient, so my job is mostly helping them debug problems by discussing possible issues. The IR and sensor issue has been pernicious (which is why we are only now really pushing to figure it out).
Everything is programmed. Here is what happens:
SCENARIO I: Beacon "on" at Position #1: GOOD--Robot executes 1, 2, and 2a. Everything works fine.
SCENARIO II: Beacon "on" at Position #2: GOOD--Robot executes 1, 2, 2b, 3, and 3a. Everything seems to work fine.
SCENARIO III: Beacon "off" entirely (or Beacon "ON" at Position #3): BAD--Robot executes 1, 2, 2b, 3, and 3a, behaving the same as SCENARIO II.
Note that we have tried SCENARIO III with both Beacon "ON" at Position #3 (which isn't intended to be part of the logic) AND with no beacon at all to ensure that there is no accidental detection of the Position #3 beacon. So this source of a "false positive" signal has been eliminated from consideration.
We have done other manipulations of the logic, and simpler programs, such that the programmers (and I) are convinced that the Beacon and IR sensor work fine, and do not detect from positions other than the one being interrogated. Also, when we remove the logic of Position #1 and purely do a detection on Position #2, this works. The problem seems to arise when we "nest" the logic--the second logical branch does not work.
Here is where it gets really strange (but perhaps this will make everything clear to somebody who knows this stuff): Our main programmer tried one more scenario--he disconnected the IR sensor entirely. When he did this, the program executed consistent with two false readings: 1, 2, 2b, 3, 3b--it worked as it should. So in theory (I think) this "forced" a FALSE reading both times.
So it seems (from my novice viewpoint) that somehow the IR sensor is triggering a TRUE in Step 2. I guess I could understand this a little bit if Step 1 had already generated and somehow "stored" a TRUE reading, but this is not the case.
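One classic way a sensor can appear to "stick" at TRUE is if its value is captured once and then reused at later decision points, instead of being re-read each time. This is only a hypothesis, not a claim about your LabVIEW diagram, but a small Python illustration shows the difference (the `readings` stream is a pretend sensor):

```python
# Hypothetical illustration: reusing a value captured earlier vs.
# taking a fresh reading at each decision point.

readings = iter([True, False, False])        # pretend sensor stream
def read_sensor():
    return next(readings)                    # pulls the next raw reading

# Stale pattern: the value is read once and reused for both branches.
first = read_sensor()
stale_results = (first, first)               # both checks see True

# Fresh pattern: each decision point takes its own reading.
readings = iter([True, False, False])        # reset the pretend stream
fresh_results = (read_sensor(), read_sensor())
```

In LabVIEW terms, the analogous question is whether both case structures are wired to their own live sensor read, or whether a wire is carrying one earlier reading into both branches.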
I have two very perplexed 13 year old programmers, and I am myself probably even more perplexed than they are. We have been looking at this for a while and trying to find answers in LabVIEW documentation, but haven't found any good explanations of why this might be happening. We have also tried to find something in the forums, but haven't had success.
I greatly welcome any thoughts and observations. And I will apologize in advance if my questions are novice--the 13 year old programmers know much more about this than any of us adults.
Thank you in advance.
Without looking, here are a couple of thoughts.
- Based on your testing, it sounds like the sensor is behaving correctly, so it seems unlikely that the sensor is getting a false YES value. However, you should make completely sure of this. Do something like have the NXT beep based solely on the value of the sensor.
- The autonomous template is sort of bad. I would try replacing the Wait for Time VI in both of the communication loops with a Wait (ms) VI at Programming>>Time>>Wait. The Wait for Time VI is actually communicating with the motors, which can slow everything down and cause oddities. The reason why is a bit complicated, but the intention was that the motors wouldn't shut off while someone is waiting.
- Are you using a state machine? If not, re-writing the code as a state machine could save you a lot of time debugging in this case. Like any code, LabVIEW diagrams without any particular design can be difficult to understand.
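For what it's worth, the state-machine idea can be sketched in Python (hypothetical names; in LabVIEW this is typically a while loop around a case structure, with the current state held in a shift register). Each loop iteration does one step's work and decides the next state:

```python
# Minimal state-machine sketch of the same autonomous logic.
# detect_ir(), drive_to(n), and handle(n) are hypothetical stand-ins
# for the sensor read, drive path, and defined instructions.

def autonomous_state_machine(detect_ir, drive_to, handle):
    state = "DRIVE_1"
    while state != "DONE":
        if state == "DRIVE_1":
            drive_to(1)
            state = "CHECK_1"
        elif state == "CHECK_1":
            state = "HANDLE_1" if detect_ir() else "DRIVE_2"
        elif state == "DRIVE_2":
            drive_to(2)
            state = "CHECK_2"
        elif state == "CHECK_2":
            state = "HANDLE_2" if detect_ir() else "HANDLE_3"
        elif state == "HANDLE_1":
            handle(1)
            state = "DONE"
        elif state == "HANDLE_2":
            handle(2)
            state = "DONE"
        elif state == "HANDLE_3":
            handle(3)
            state = "DONE"
```

The debugging payoff is that the state variable gives you one obvious place to log or beep, so you can see exactly which transition fired.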
I'd be happy to take a look at the code. You can email me a zipped up robot project at firstname.lastname@example.org, or attach here.
We are not using a state machine--I have just been reading up on that as a possible issue. Up until now, essentially everything has been done by the boys; my contribution has been helping them debug. So I don't think they are aware of a state variable (I just learned about it today).
But you noted a "wait"--I had read elsewhere about this as a good idea between steps. We have not (in general) employed "wait" between any steps, and it appears that this can introduce its own issue. It seems that one quick test may be to put a "wait" after moving to position #2, but before detecting IR (and then again wait after detection).
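That quick test can be sketched as follows (in Python rather than LabVIEW, just to pin down the idea; the function name and timing parameters are hypothetical and would need tuning): pause after arriving at the position so the robot settles, then take several readings and require a majority before declaring a detection.

```python
import time

# Hypothetical "wait, then read" detection: settle first, then poll the
# sensor several times and take a majority vote instead of trusting a
# single instantaneous reading.

def settled_detect(read_sensor, settle_s=0.5, samples=5, gap_s=0.05):
    time.sleep(settle_s)                 # let the robot stop moving
    hits = 0
    for _ in range(samples):
        if read_sensor():
            hits += 1
        time.sleep(gap_s)                # small gap between readings
    return hits > samples // 2           # majority of samples saw the beacon
```

Even if the final fix turns out to be elsewhere, sampling this way tends to make the sensor behavior much more repeatable during testing.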
I will see about uploading the program--although my son is not one of the main programmers, his computer is the primary authoring computer. I think the two of us can figure out how to get this uploaded.
If you open the project you're working in, you should be able to find something that says "Build Specifications" at the bottom. If you right-click that, you have the option to go to New->Zip File
Choose that and save the zip file. This will save all of the files contained in your program into a zip file you can post here.
For a robot project, there is a button that says Share in the top right corner of the Robot Project Center that will automatically compress your project.
When I do that from the Robot Project the "SHARE" is in the top left corner (FTC uses an older version of LabVIEW, I think?). When I do this, it compresses the files to a 26MB size. That seems quite large and I wasn't sure it would mail properly at that size. (I am guessing some things are being carried around in the "project" that are unnecessary.) I can upload, if it makes sense....
Sorry to be so hapless on this...
Hey, what you did worked, I got it. You're right about it being in the left corner, my mistake. FTC uses the most recent version.
Garrett--I will work with the boys this weekend and explore what you have noted above. I will also double-check with them to confirm absolutely that what we have experienced is completely consistent and repeatable, since the logic seems so puzzling.
Thank you again; I will update later.
We did a series of tests over the weekend. The following sets of data define what we experienced, based on various inputs.
TEST = test #
IR BEACON: 1 = beacon at Position #1 (first logical test), 2 = beacon at Position #2 (second logical test), OFF = no beacon at all (implicitly the third position). Note that a detected beacon is a "TRUE".
IR SENSOR: ON if it is plugged in, OFF if it was unplugged, OFF/ON if plugged in AFTER Position #1
RESULT = the observed behavior (possibilities should be "TRUE", "FALSE/TRUE", or "FALSE/FALSE")
Table data is here, but I will summarize findings after the table...
TEST  IR BEACON  IR SENSOR  RESULT
 1    OFF        ON         TRUE
 2    OFF        OFF        FALSE/FALSE
 3    OFF        ON         FALSE
 4    1          ON         TRUE
 5    2          OFF/ON     FALSE/FALSE
 6    2          OFF/ON     FALSE/FALSE
 7    2          ON         TRUE
 8    2          ON         FALSE/FALSE
 9    2          OFF        TRUE
10    OFF        ON         TRUE
11    1          ON         TRUE
12    1          OFF        FALSE/FALSE
Note that 1-8 and 9-12 were variations--different programs but with minor tweaks--testing for time lags to see if they impacted anything. We also tested twice on a third program. All of them had the "nested logic" in common. When we removed one of the logical loops, things worked fine. However, with the nested logic, we saw a very consistent logical response: if the program starts with the IR sensor plugged in, we always get a TRUE response; if the program starts with the IR sensor unplugged, we always get a FALSE.
The sensor, based on other tests, clearly is working and clearly is detecting the IR Beacon. But there seems to be some "set value" for the case of the IR Sensor being plugged in (it always indicates "TRUE"). One more odd item, that may have nothing to do with this: our programmer coded two distinct noises for TRUE and FALSE (as an auditory indication to us). No matter what the robot actually did, only the "FALSE" tone ever played.
I think we are going to try and learn a state machine, but this is just really puzzling behavior. Any other thoughts?