You may be running into an “in-placeness” problem. “In-placeness” is LabVIEW's ability to reuse internal buffers so that it does not need to allocate new memory, which results in higher execution speed. LabVIEW improves this algorithm with each release, but the changes can sometimes cause performance problems for older programs in certain circumstances. One of these happened in the 6.0 to 6.1 transition: if you use tunnels instead of shift registers to pass data into a loop, the memory manager overhead can, in some cases, increase dramatically. How can you tell if you have this problem?
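Since LabVIEW is graphical, there is no text code to show, but the underlying idea translates directly. Here is a hypothetical analogy in Python (not LabVIEW code): an in-place update reuses the existing buffer, while a copying operation forces a new allocation - the same distinction the in-placeness algorithm tries to exploit.

```python
# Analogy only: in-place update vs. copying allocation.
a = [0] * 5
before = id(a)

a[0] = 42                  # in-place update: the same buffer is reused
assert id(a) == before     # no new allocation

b = a + [99]               # copying operation: a new buffer is allocated
assert id(b) != before     # distinct allocation
```

When LabVIEW can prove an operation is safe to do in place, it behaves like the first case; when it cannot, you pay for the second.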
In LabVIEW 7.1, there is a native way to find out where LabVIEW allocates memory. Select Tools->Advanced->Show Buffer Allocations.... A window will pop up with a set of checkboxes for selecting which types of memory objects you want to highlight on your block diagram; check them all to start. Open your block diagram if it is not already open. If your VI is compiled, you will see a black dot at every location where LabVIEW allocates memory. Hit the Refresh button on the dialog to flash the dots for greater visibility. Now, as Steve Rodgers, our resident LabVIEW guru, likes to say, play “hide-the-dots”. You can't get rid of all of them, but you can usually get rid of a lot of them. Use shift registers instead of tunnels to pipe data into/through loop boundaries. Tunnels may be fine if data is only flowing into the loop - check the dots.
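To see why shift registers help, here is another hypothetical Python analogy (again, not LabVIEW code): a shift register carries one buffer across loop iterations so it can be updated in place, whereas passing data back through a tunnel each iteration behaves more like rebuilding the data from scratch.

```python
# Shift-register style: one accumulator buffer, mutated in place
# across iterations - the allocation survives the whole loop.
acc = [0] * 4
acc_id = id(acc)
for i in range(3):
    acc[i % 4] += i        # update the carried buffer in place
assert id(acc) == acc_id   # still the same allocation

# Tunnel-style copying: a fresh list object is built every iteration.
# (Snapshots are kept alive so each allocation is genuinely distinct.)
snapshots = []
data = [0] * 4
for i in range(3):
    data = [x + i for x in data]   # new allocation each pass
    snapshots.append(data)
assert len({id(s) for s in snapshots}) == 3
```

The first pattern is what the Show Buffer Allocations tool rewards: one dot outside the loop instead of an allocation on every iteration.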
In LabVIEW 7.0, all serial port access was changed to VISA. The serial port VIs introduced in LV 3.0 were reimplemented in VISA for backwards compatibility. I would strongly second the above suggestion that, if possible, you rewrite your routines to use VISA directly instead of the old serial port methods. In most cases, you will get a superior solution.