05-09-2007 12:49 PM
05-09-2007 01:00 PM
Olaf,
Post the old and new code. I'll look at it and see if I see any potential problems.
05-11-2007 09:19 AM - edited 05-11-2007 09:19 AM
Here are two zip files with the old and new versions. You will see that there are two different functions that read data after the command has been sent. The reason is that when the software was developed on a PC, one function worked fine, but when the code was installed on the FieldPoint it did not work, so a second version was written that did work on the cFP-2020. Now, with LV 8.20, none of the versions works, but the original version still seems to be the most stable. For the "new" package I have tried to write my own version that does exactly what the manual of our device says, but it still does not work. Maybe the problem is related to this 1073676294 error that our former programmer tried to deal with?
@centerbolt wrote:
Olaf,
Post the old and new code. I'll look at it and see if I see any potential problems.
Message Edited by Olaf Stetzer on 05-11-2007 09:21 AM
05-14-2007 07:53 AM
Olaf,
There are only two things I can think of that would keep it from running on cFP. The first would be differences in software versions between the development machine and the cFP target. I believe you have already checked this. The other is hardware timing. Since this instrument interface is so timing dependent, I suggest you make all your VISA reads and writes synchronous. Synchronous mode should give you better control of the timing. If you can get it to work using the serial port on a PC, it should work on cFP.
I would test first with the PC serial port and make sure the app is stable. Then I would target the cFP. Remember that timing will change between targeted mode and fully deployed.
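The synchronous read pattern recommended here can be sketched outside LabVIEW. Below is a rough Python analogue using a pyserial-style port object; the function name `read_exact`, the 1-second timeout, and the fake-port shape are illustrative assumptions only, since the actual application is LabVIEW VISA code.

```python
def read_exact(port, nbytes):
    """Block until exactly nbytes arrive or the port's timeout expires.

    `port` only needs a pyserial-like read(n) that may return fewer
    bytes than requested once the configured timeout elapses.
    """
    buf = bytearray()
    while len(buf) < nbytes:
        chunk = port.read(nbytes - len(buf))
        if not chunk:  # timeout with nothing received: fail loudly
            raise TimeoutError(f"got {len(buf)} of {nbytes} bytes")
        buf.extend(chunk)
    return bytes(buf)

# With real hardware this would be driven by something like:
#   import serial
#   port = serial.Serial("COM1", 9600, timeout=1.0)  # blocking, 1 s timeout
#   frame = read_exact(port, 512)
```

The point of the synchronous style is the same as in VISA: the read does not return until the data (or a timeout) arrives, so the timing is controlled by the port rather than by guessed delays.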
05-16-2007 11:20 PM
06-04-2007 11:26 AM
Hi again,
@tuba wrote:
What versions of NI-Serial and NI-VISA do you have working on your LV-RT 7.1.1 system? If they are not the same as you are using with LV 8.2, I would recommend the following on your LV 7.1.1 system:
1. Upgrade VISA to NI-VISA 4.1 -- test to make sure program still works
2. Upgrade NI-Serial to version 3.1 or 3.2 -- test to make sure program still works
These steps will allow you to isolate which component is influencing the behavior of your application. You should not upgrade NI-Serial first, as it may force you to upgrade VISA along with it, and we are trying to change only a single component at a time.
-tuba
06-06-2007 12:04 AM - edited 06-06-2007 12:04 AM
Message Edited by JasonS on 06-06-2007 12:05 AM
06-06-2007 08:05 AM
@JasonS wrote:
There have been quite significant changes to NI-Serial, NI-VISA, and LabVIEW Real-Time between LabVIEW RT 7.1 and 8.20. This provides a lot of opportunities for time-sensitive code like this to break, especially if the timing was originally determined through trial and error.
If I had to venture a guess, I would lean toward saying that the timing of the DTR toggling is the most likely cause of the issue. I say this because it is the timing of this line toggling that determines how many bytes will be sent. The actual receiving of the byte is handled by the driver in the background, so unless you are actually receiving buffer overrun errors, it is likely that the data is never being sent from the instrument.
Try something like I have pictured below. It performs a read after each toggle of the DTR line. While inefficient, it removes the necessity to guess at an appropriate delay between DTR toggles, because once the expected byte is received, the instrument should be ready for the next round.
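The read-per-toggle idea described here can be sketched as a Python/pyserial analogue. The original is a LabVIEW block diagram attached as an image, so everything below, including the function name `clock_bytes_via_dtr` and the timeout behaviour, is an assumed illustration of the technique, not the actual code.

```python
def clock_bytes_via_dtr(port, nbytes):
    """Toggle DTR once per byte; after each toggle, block until that
    byte arrives before toggling again.

    `port` needs a settable .dtr attribute and a pyserial-like read(1)
    that returns b"" on timeout. Because each toggle waits for its byte,
    no fixed inter-toggle delay has to be guessed.
    """
    data = bytearray()
    level = True
    for _ in range(nbytes):
        level = not level
        port.dtr = level      # the edge tells the instrument to send one byte
        b = port.read(1)      # wait for that byte before the next edge
        if not b:
            raise TimeoutError(f"no byte after toggle {len(data) + 1}")
        data.extend(b)
    return bytes(data)
```

As noted above, this is inefficient (one read call per byte), but it makes the handshake self-pacing: the instrument's own response, not a tuned delay, gates the next DTR edge.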
06-06-2007 09:32 AM
Good news: this approach (with clearing the errors, for now) was a major step forward! Thanks JasonS! For the first few read cycles the data looks reasonable (the full data of 512 bytes plus status), but after a certain time it looks corrupted, basically 0 0 0 0 0, which is probably just the content of the predefined byte array. I suspect that the whole software running on the FieldPoint is at the limit of what the FieldPoint can handle, so it might partly be a matter of optimizing and profiling the current code (which I don't yet know how to do, but I will find out). Still, knowing the cause of the above-mentioned error would help to clarify some things.
@Olaf Stetzer wrote:
@JasonS wrote:
There have been quite significant changes to NI-Serial, NI-VISA, and LabVIEW Real-Time between LabVIEW RT 7.1 and 8.20. This provides a lot of opportunities for time-sensitive code like this to break, especially if the timing was originally determined through trial and error.
If I had to venture a guess, I would lean toward saying that the timing of the DTR toggling is the most likely cause of the issue. I say this because it is the timing of this line toggling that determines how many bytes will be sent. The actual receiving of the byte is handled by the driver in the background, so unless you are actually receiving buffer overrun errors, it is likely that the data is never being sent from the instrument.
Try something like I have pictured below. It performs a read after each toggle of the DTR line. While inefficient, it removes the necessity to guess at an appropriate delay between DTR toggles, because once the expected byte is received, the instrument should be ready for the next round.
Thanks for that suggestion. Good to know that you can at least confirm that the internal timing has changed between 7.1 and 8.20. I have tried to mimic your code, and at least the behaviour has changed now: I get only one byte in total and the following error message: -1073807298 (could not perform operation because of I/O error). If I clear the error after the read function, I get the expected number of bytes, but I still have to check whether they make sense. Maybe I have to add a wait cycle between the DTR toggle and the read function to allow the device to respond? I will play around in that direction....
If someone could tell me what the cause of the error might be, I would appreciate it!
Olaf
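The "clear the error and keep reading" workaround described in this reply can be expressed, very roughly, as a retry around the read. This is a hypothetical Python sketch of that control flow; the helper name `read_tolerating_transient_error` and the retry limit are assumptions for illustration, not the LabVIEW implementation.

```python
def read_tolerating_transient_error(read_once, nbytes, max_retries=1):
    """Accumulate nbytes from read_once(n), treating up to max_retries
    I/O errors as transient: discard ("clear") the error and retry the
    read instead of aborting the whole transfer."""
    buf = bytearray()
    retries = 0
    while len(buf) < nbytes:
        try:
            chunk = read_once(nbytes - len(buf))
        except IOError:
            retries += 1
            if retries > max_retries:
                raise        # persistent failure: propagate the error
            continue         # transient: clear it and try again
        if not chunk:
            break            # no data and no error: stop with what we have
        buf.extend(chunk)
    return bytes(buf)
```

Note that this only masks the symptom; as the thread discusses, the underlying cause of the VISA I/O error still matters.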
06-06-2007 10:55 AM