I am seeing a major difference in update rate between two methods of getting values from an OPC server via DataSockets. The first method, subscribing to a value, is relatively fast, but appears to be very difficult to program with for multiple channels. The second method, reading a DataSocket, is much slower, but easier to program with (FOR loops over multiple sockets and such).
I have attached a VI to show what I mean. There is a major difference in execution time between the two frames: frame A completes in an average of ~0 ms, while frame B takes much longer, typically on the order of 50 - 100 ms and in extreme cases closer to 600 ms.
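To make the comparison concrete in text form (since the actual code is a block diagram), here is a rough sketch of what the two frames are doing. This is only illustrative Python-style pseudocode: the FakeItem class and its subscribe/read methods are placeholders I made up so the sketch runs on its own, and they do not correspond to the real DataSocket API.

```python
import random
import time

# Purely hypothetical stand-in for an OPC/DataSocket item, so this sketch
# is self-contained; it does NOT model the real DataSocket VIs.
class FakeItem:
    def __init__(self, url):
        self.url = url
        self._callbacks = []

    def subscribe(self, callback):
        # Frame A style: register once; the "server" pushes updates to us later.
        self._callbacks.append(callback)

    def read(self):
        # Frame B style: an explicit, blocking read on every call.
        time.sleep(random.uniform(0.05, 0.1))  # mimic the 50 - 100 ms I measure
        return random.random()

urls = ["opc://localhost/channel%03d" % i for i in range(100)]
items = [FakeItem(u) for u in urls]

# Frame A equivalent: one subscription per channel, set up once up front.
# Fast at run time, but wiring ~100 of these by hand is what I find awkward.
for item in items:
    item.subscribe(lambda value, u=item.url: print(u, value))

# Frame B equivalent: a FOR loop that reads every socket each cycle.
# Easy to program, but the per-read delay multiplies across 100 channels.
start = time.time()
values = [item.read() for item in items]
print("one polling pass took %.1f s" % (time.time() - start))
```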
What is the difference? What am I doing wrong? And most importantly,
I want to write a routine that flexibly updates many (~100) different DataSockets. How can I do this correctly and efficiently?
For the record, I am using LabVIEW PDS 7.0 on Windows 2000.
I have also recently installed the Datalogging and Supervisory Control Module, but I am not yet very familiar with it.
The hardware is a Beckhoff system over Profibus. The OPC server is Beckhoff TwinCAT v2.9.0 (Build 940).