
drive FPGA inputs in ModelSim testbench

I have simulated (in ModelSim) all modules of my FPGA code that do not connect directly to the FPGA input pins, and the code works properly in simulation; however, it doesn't function properly in the actual hardware. I'm trying to add the acquisition block that acquires data at the FPGA inputs, but I can't figure out how to drive a DIO input in the testbench.

 

For example: I am acquiring 16 input signals. Without the acquisition block, I created a U16 control with one bit for each signal and then drove the control using the NiFpga_Write function detailed in the NI-created testbench file. This worked fine.
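For reference, the pattern I used looked roughly like this (the NiFpga_Write procedure comes from the NI-generated testbench; the address constant and exact signature are placeholders from memory):

```vhdl
-- Sketch from memory: the real NiFpga_Write procedure and the control's
-- address constant are defined in the NI-generated testbench file.
stim : process
begin
  wait for 1 us;
  -- set bits 0 and 3 of the 16 simulated input signals
  NiFpga_Write(kInputsU16Address, x"0009");
  wait;
end process;
```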

 

Now I need to replace those controls with the actual FPGA inputs. I can't find anything on the forums or the NI co-simulation white papers about how to do this. Any ideas?

 

 

Message 1 of 5

Hi NickDngc,

 

I am not aware of any way to configure a simulation to use actual FPGA inputs, only simulated IO. Perhaps you can give us more information on the issues you are seeing. Also, which articles are you currently looking at and which hardware are you using?

 

David C
Message 2 of 5

What I mean is -- I want a way to drive DIO0. I can create a module of code, give it a control as an input, and then drive that control -- but that gives me all the DMA overhead and latency that would not occur in a real application, plus it doesn't allow me to model the ADC flow. I would like to simulate driving signals on DIO0-11 and AI0-AI4 without using controls. I need to simulate the code snippet shown in the attached capture.

 

I'm using a PXI-7842R, Connector 0, max sample rate (200kS/s). I used the third-party simulation white paper located here. The auto-generated testbench has comments that describe how to read and write to controls, indicators, and DMA FIFOs -- but no information on writing to simulated FPGA inputs.

 

I realize that I could replace the IOs in the code with a cluster that would allow me to write all inputs at once, but this isn't a good solution for a couple of reasons:

  1. The VHDL for clusters ends up as a record, which makes writing and maintaining the testbench more complicated (accessing bool types and record subarrays instead of just writing a nice std_logic_vector)

  2. How will I know that writing the cluster will simulate the appropriate delays for the A-D conversion chain?
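To make point 1 concrete, here is roughly how the two styles compare in the testbench (type and field names are invented for illustration; the real record comes from the generated VHDL for the cluster):

```vhdl
-- Illustrative sketch only -- names are invented.
type tInputCluster is record
  DIO : std_logic_vector(11 downto 0);
  AI0 : std_logic_vector(15 downto 0);
end record;
signal InputCluster : tInputCluster;

-- Driving the cluster: every write goes through record fields...
InputCluster.DIO <= "000000000101";
InputCluster.AI0 <= x"1234";

-- ...versus a plain vector control, where one assignment does it all:
signal DIO : std_logic_vector(11 downto 0);
DIO <= "000000000101";
```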

 

My problem occurs when I combine two separate parts of the code. The first block (shown in the attachment) is an Acquire Loop that acquires data and writes it to two destinations: the host, and a validation state machine that verifies the timing between different inputs. The second block is the validation state machine. The validation state machine works perfectly in simulation, but the real FPGA application isn't synchronizing to the data frames correctly. LabVIEW mechanisms force me to keep the two applications in separate loops (an SCTL for the state machine, and a normal while loop, with its associated extra two clock cycles of delay, for the analog subcircuit). I'd like to check the system as a whole, so I need to be able to simulate signals appearing at the actual FPGA inputs and passing through the analog signal chain.

Message 3 of 5

Hi NickDngc,

 

Take a look at this example. I believe the attached testbench generates an IO pattern directly on the IO nodes (it demonstrates controls and indicators as well). If I am not mistaken, this is similar to what you are trying to achieve. In particular, see lines 400+ of the .vhd code. Essentially it just uses standard HDL programming for the IO.
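As a rough sketch of the pattern in that testbench, the IO nodes are driven with ordinary VHDL signal assignments (port names below are illustrative; the real names come from the generated simulation model):

```vhdl
-- Illustrative sketch: drive a top-level digital input directly,
-- just like any other signal in a standard VHDL testbench.
dio_stim : process
begin
  wait for 100 ns;
  dio0 <= '1';      -- rising edge on DIO0
  wait for 5 us;
  dio0 <= '0';
  wait;
end process;
```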

 

David C
Message 4 of 5

I saw from the attached testbench that you can drive the FPGA digital inputs directly, although they use the "dioXX" naming convention instead of the name assigned in the LabVIEW FPGA VI. That's good info.

 

However, I didn't see an example for the analog inputs. Since there is no unencrypted top-level port description for the NiFpgaSimulationModel instance, I'm assuming that "connector0_AI0" can be driven with a 16-bit unsigned signal or a converted std_logic_vector. However, I still need to provide the Convert signal for each AI. How should this signal be driven -- pulse or level? What are the minimum and nominal pulse widths? Should it be driven at the desired acquisition frequency (i.e., one pulse per input every 200 us), or is there some other timing relationship? Can all the Convert signals be driven simultaneously, or do they need to be sequenced? If they need to be sequenced, what are the timing requirements?
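For concreteness, here is the stimulus pattern I'm imagining. Every timing value in it is a guess -- the pulse-vs-level behavior, pulse width, and sequencing are exactly the open questions above:

```vhdl
-- All timing below is a placeholder; the Convert behavior is unknown.
ai_stim : process
  constant kSamplePeriod : time := 5 us;  -- placeholder sample period
begin
  wait for 1 us;
  connector0_AI0 <= x"1234";              -- present the sample value
  connector0_AI0_Convert <= '1';          -- assume an active-high pulse
  wait for 25 ns;                         -- pulse width: unknown
  connector0_AI0_Convert <= '0';
  wait for kSamplePeriod;
  -- ...repeat for subsequent samples
  wait;
end process;
```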

Message 5 of 5