
Hello, 

My application involves measuring data from an E5061A vector network analyzer (Agilent).

Specifically, my task is to read the network analyzer's buffer and transfer its data to my PC.

I implemented this using the TCP Read VI.

My PC runs the Windows XP operating system. My LabVIEW version is 7.1.

When I measured the time this command takes to complete, I got ~500 ms.
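
In text form, what my VI does is roughly this (a Python sketch, since I can't paste a LabVIEW diagram here; the IP address, port 5025, and the query string are placeholders, not my actual setup):

import socket
import time

# Hedged sketch: the real code is a LabVIEW 7.1 diagram using TCP Open/Write/Read.
# Address, port, and query string are placeholders; the E5061A manual gives the
# real values.
sock = socket.create_connection(("192.168.0.10", 5025), timeout=5.0)
sock.sendall(b":CALC1:DATA:FDAT?\n")      # assumed trace-data query

t0 = time.perf_counter()                  # clock before the read
data = sock.recv(65536)                   # analogous to a single TCP Read
t1 = time.perf_counter()                  # clock after the read

print("read took %.1f ms, %d bytes" % ((t1 - t0) * 1000, len(data)))
sock.close()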

 

Does anyone know why it takes so long?

Is this typical of the system I am working with?

Any advice or pointers on how to reduce this time to <10 ms would be helpful.

Thanks,

Abe

Message 1 of 9

Just a guess, but how long a device needs to process and reply to a command is out of your control. You may find something in the device manual: a specification for that command, or a setting to reduce the amount of data. I doubt it has anything to do with the TCP communication or with LabVIEW.


- Thomas -
Message 2 of 9

Abe,

 

Your experience is one of the reasons people use data acquisition boards that plug directly into the computer.  Using TCP for data acquisition involves all sorts of uncertainties that can easily add delay.  However, 500 ms is pretty extreme, so you should be able to reduce it quite a bit.  Please post your code so we can take a look; without knowing what you are doing, it is hard to suggest anything.  My guess is that you are initializing the instrument on every read, which you don't need to do, but that is only a guess.
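
If it helps, here is the open-once pattern I mean, sketched in Python since a LabVIEW diagram can't be pasted as text (the address, port, and query string are placeholders, not your actual setup):

import socket

# Hedged sketch: open the connection and initialize the instrument once, then
# only re-send the query inside the acquisition loop.
def acquire(n_reads):
    sock = socket.create_connection(("192.168.0.10", 5025), timeout=5.0)  # once
    try:
        for _ in range(n_reads):
            sock.sendall(b":CALC1:DATA:FDAT?\n")  # query only, no re-initialization
            _ = sock.recv(65536)
    finally:
        sock.close()                              # once, at shutdown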

Message 3 of 9

It may also be because the network analyzer in question has a VERY large buffer. The transfer method can make a huge difference with this instrument as well; see page 19 of the Agilent E5061A Data Sheet.

 

Transferring 401 data points over LAN in ASCII mode takes 510 ms; in binary mode it takes 4 ms.

 

You might look into your transfer mode and see if you can change the format.
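
For illustration, here is a rough Python sketch of what the format switch and binary read look like at the SCPI level. The command names are typical SCPI guesses and should be checked against the E5061A programming manual; the address, port, and query are placeholders:

import socket
import struct

def recv_exact(sock, n):
    # Read exactly n bytes; a single recv may return fewer.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed mid-block")
        buf += chunk
    return buf

sock = socket.create_connection(("192.168.0.10", 5025), timeout=5.0)
sock.sendall(b":FORM:DATA REAL32\n")       # switch from ASCII to 32-bit binary
sock.sendall(b":CALC1:DATA:FDAT?\n")

# Binary replies arrive as an IEEE 488.2 definite-length block: '#', one digit
# giving the width of the length field, the byte count, then the payload.
assert recv_exact(sock, 1) == b"#"
ndigits = int(recv_exact(sock, 1))
nbytes = int(recv_exact(sock, ndigits))
payload = recv_exact(sock, nbytes)
values = struct.unpack(">%df" % (nbytes // 4), payload)  # byte order may need checking
sock.close()
print(len(values), "values")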

 

Rob

Message 4 of 9

Regarding your question, I open the TCP connection only once and close it when the main code completes.

 

You will see from the code itself (check_buffer_timing.vi) that the timing clocks are placed immediately before and after the TCP Write command itself. The code is attached.

 

My goal is to perform this task in less than 80 ms.

 

(I wrote 1ms by mistake...)

 

Thanks,

Abe

 

 

Message 5 of 9

Hi Rob,

 

This sounds interesting.

I'll look into this point.

Thanks,

Abe

Message 6 of 9

I tried reading the network analyzer's output data in BINARY format, under conditions similar to those in the device's specifications.

Unfortunately, the measured time stays the same, ~500 ms.

 

Maybe I should reconfigure my PC's operating system in some way to achieve the < 80 ms target?

 

Is it possible that LabVIEW itself contributes to this long time somehow?

 

Abe

Message 7 of 9

I tried adding an artificial time delay between writing the data-request command and reading the buffer.

 

When the delay was ~400 ms, the time to read the network analyzer's buffer dropped to < 10 ms.

 

The device's specifications state that its TCP send time, for the same amount of data in 32-bit binary format, is 4 ms.

 

So I don't understand why I need to add an extra 400 ms delay to get this data into my LabVIEW application.

 

Do I need to configure the operating system in some special way, dedicated to LabVIEW?

 

Abe

Message 8 of 9

The only reason for a delay like that would be to give the instrument time to respond; there is no need for one on the LabVIEW side. The system I am working on now reads 12 instruments 5 times a second. Granted, I'm not getting as much data from each instrument as you are, but there is no delay built in between writing the data request and reading the data.
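
For what it's worth, here is the no-delay pattern sketched in Python (placeholders as in the earlier sketches). One common reason a fixed delay seems necessary is that a single read returns only the bytes that have arrived so far; looping until the terminator, or reading the exact byte count of a binary block, removes the need for it:

import socket

# Hedged sketch: write the request, then immediately read until the response
# terminator arrives. No artificial delay between write and read.
sock = socket.create_connection(("192.168.0.10", 5025), timeout=5.0)
sock.sendall(b":CALC1:DATA:FDAT?\n")       # write the data request...

resp = b""
while not resp.endswith(b"\n"):            # ...then read until the terminator
    chunk = sock.recv(65536)
    if not chunk:
        break                              # instrument closed the connection
    resp += chunk
sock.close()
print(len(resp), "bytes received")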

 

You might want to check with Agilent to see if they have any idea why the delay is necessary.

 

Rob

Message 9 of 9