Visa PCI Driver

Hi,

 

I have a PCI board based on PLX's PCI9052, for which we had a driver developed in C++.

We normally communicate with this driver through a DLL that was also built in C++.

 

The thing is that when a single read or write transaction goes through LabVIEW, there is only about a 3.7 µs to 4 µs delay between each read or write:

 

LabVIEW ----> C++ DLL ----> C++ Driver ----> Hardware  (yields 3.7 µs read-to-read time)

 

But if I use the VISA driver development kit to generate a driver, the delay increases:

 

LabVIEW ----> VISA resource name ----> Hardware

 

Though the latter path seems a bit shorter, the delay is as high as 12 µs.

 

I thought that VISA would be faster than the C++ DLL.

 

Any ideas on how to use the VISA driver to obtain maximum speed?

 

 

 

Message 1 of 5

Hi,

 

If you're accessing memory-mapped registers on your device (i.e., device registers that share the global system address space), consider using viMapAddress and viPeek/viPoke.  The viPeek/viPoke operations will access HW directly for a PCI device, bypassing the various layers of software in between.

 

The flow of your application would look something like this:

 

viOpen(PCI device resource) --> viMapAddress(address range for the registers you want to access) --> viPeek/viPoke(register offset) --> viUnmapAddress --> viClose
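
A minimal C sketch of that flow, assuming the board shows up as a PXI/PCI VISA resource (the resource name "PXI0::1::INSTR" here is hypothetical), that BAR0 is a memory-space BAR, and that the register offsets are placeholders:

/* Sketch of the viOpen -> viMapAddress -> viPeek/viPoke -> viUnmapAddress -> viClose flow.
 * The resource name, BAR space, map size, and register offsets are assumptions. */
#include <visa.h>
#include <stdio.h>

int main(void)
{
    ViSession rm, dev;
    ViAddr    base;
    ViUInt16  value;

    if (viOpenDefaultRM(&rm) < VI_SUCCESS)
        return -1;

    /* Hypothetical resource name for the PCI board */
    if (viOpen(rm, "PXI0::1::INSTR", VI_NULL, VI_NULL, &dev) < VI_SUCCESS) {
        viClose(rm);
        return -1;
    }

    /* Map 256 bytes of BAR0 (assumed to be a memory-space BAR) into user space */
    if (viMapAddress(dev, VI_PXI_BAR0_SPACE, 0x0, 0x100, VI_FALSE, VI_NULL, &base) >= VI_SUCCESS) {
        viPeek16(dev, (ViAddr)((ViByte *)base + 0x10), &value);   /* read register at offset 0x10 */
        viPoke16(dev, (ViAddr)((ViByte *)base + 0x12), 0xABCD);   /* write register at offset 0x12 */
        printf("Register 0x10 = 0x%04X\n", value);
        viUnmapAddress(dev);
    }

    viClose(dev);
    viClose(rm);
    return 0;
}

Once the address range is mapped, viPeek/viPoke touch the hardware directly, so there is no per-access kernel transition.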

 

As mentioned previously, this IO access method will only optimize memory-mapped register accesses.  If you're performing PCI configuration cycles, it won't help.

 

Thanks,

Eric G.

National Instruments R&D

 

Message 2 of 5

Hi

 

I am using the IO address range, not the memory address range; in that case, what should I do?

 

Are there any ways to improve access speeds for single reads and writes?

Message 3 of 5

Hmm, unfortunately I don't think there's much you can do to optimize IO space reads and writes using NI-VISA.  The function call stack for an IO port write using VISA looks something like this:

 

LabVIEW --> NI-VISA (user mode) --> NI-VISA (kernel mode) --> HW
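
For reference, with IO space the single accesses would typically go through VISA's high-level register calls, each of which takes that user-to-kernel path. A short sketch, assuming BAR1 is the board's IO-space BAR and the register offsets are placeholders:

/* Sketch: single-register accesses to an IO-space BAR through NI-VISA's
 * high-level calls. BAR1 and the register offsets are assumptions.
 * Each call crosses the user/kernel boundary, which is where the extra
 * per-access latency comes from. */
#include <visa.h>

ViStatus io_space_example(ViSession dev)
{
    ViUInt16 status_reg;
    ViStatus err;

    /* Write a 16-bit value to a register at offset 0x04 of the IO BAR */
    err = viOut16(dev, VI_PXI_BAR1_SPACE, 0x04, 0x0001);
    if (err < VI_SUCCESS)
        return err;

    /* Read a 16-bit status register at offset 0x06 */
    return viIn16(dev, VI_PXI_BAR1_SPACE, 0x06, &status_reg);
}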

 

It looks like NI-VISA is adding some extra overhead compared to using LabVIEW's direct IO access feature (possibly as a result of the user-->kernel transition, which can be a relatively time-consuming operation).

 

On a slightly related note: If you have the ability to modify this board's HW design, it would be best to use memory space instead of IO space.  IO space is a legacy resource, and some systems have a difficult time configuring PCI devices that request IO space instead of memory space.

 

Thanks,

Eric G.

National Instruments R&D

 

Message 4 of 5

Hi Eric

 

Thanks for spending the time.

 

I will try changing my board's I/O map to a memory map and see how the performance changes.

 

Message 5 of 5