Reg. SPI on 7833R on a PXI

Hello all,

 

I am attempting to implement the SPI protocol (http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus) on a PXI and its FPGA card. I am unable to compile the FPGA top-level VI, though the 'intermediate files' generation proceeds without warnings or errors. Help is desperately needed and would be greatly appreciated. The details are as follows.

HARDWARE: PXI: chassis-1042, controller-8110, FPGA card - 7833R
SOFTWARE: LabVIEW 2009 SP1 and the corresponding modules for LabVIEW RT and LabVIEW FPGA

ATTACHMENTS:
1] spi_dual_port_example.zip  --- The original IP downloaded from http://www.ni.com/white-paper/9117/en
   It is an implementation of dual-port SPI for a Single-Board RIO
2] SPI_for 7833R.zip --- My attempt at modifying the sample code to suit the hardware configuration detailed above
3] SPI Port FPGA VI compilation errors.txt --- The text messages displayed in the "LabVIEW Compile Server 2009" window when I begin compiling the FPGA top-level VI to generate the bitfiles.


PROBLEMS AND QUESTIONS:

1]
The FPGA VI does not compile successfully. The 7 stages of generating intermediate files complete without showing any errors, after which the Compile Server starts. Initially it seems to proceed smoothly, but then it begins showing a LOT of errors, and when it reaches 40 errors it declares that compilation has failed. Some of the errors I get are shown below (the complete set of messages from the Compile Server window is in the file "SPI Port FPGA VI compilation errors.txt").

---------------------------------------
ERROR:HDLParsers:851 - "C:/NIFPGA~1/srvrTmp/LOCALH~1/SPI_TR~1/prim_CVT_Facade00000296.vhd" Line 100. Formal kInIncludesOverflow of entity with no default value must be associated with an actual value.
ERROR:HDLParsers:851 - "C:/NIFPGA~1/srvrTmp/LOCALH~1/SPI_TR~1/prim_CVT_Facade00000296.vhd" Line 100. Formal kOutSigned of entity with no default value must be associated with an actual value.

ERROR:Xflow - Program xst returned error code 6. Aborting flow execution...
---------------------------------------

Searching ni.com reveals that error code 6 is a LabVIEW error that can be generated for any of several reasons, whereas the HDLParsers:851 error has been reported to correspond to exactly one cause (which also happens to produce error code 6): "When an input port of a declared component is not connected, the following error message is reported".

But on looking through the FPGA VI and its subVIs, I didn't find any unconnected inputs; besides, it looks the same as the VIs for the sbRIO which I used as a starting point. Suggestions on how to investigate this error and identify its cause are welcome.

2]
The code for the sbRIO used a 10-line I/O port of the sbRIO for the variable CS (Chip Select), configured as an integer in the VIs; thus it used 2 such sbRIO ports for the 2 SPI ports implemented, with the remaining 6 I/Os of both ports coming from a single additional port of the sbRIO. I saw no reason to use 10 lines for CS and configure it as an integer, since SPI is a 4-wire protocol (and only 3 wires if you connect just 1 device to each SPI port), so I used a single I/O for CS and configured it as a boolean in the VIs. I also saw no reason why CS has to be on a separate port of the FPGA card, so I used DIOs 0-7 on the MIO, i.e. all DIOs for both SPI ports are chosen from the 7833R's port 0 on the MIO. Could these changes have anything to do with why the FPGA VI doesn't compile? If yes, why? And how do I solve the problem?
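To make the 4-wire argument concrete, here is a minimal software model of one SPI transfer (mode 0, MSB first). It is an illustrative sketch in plain Python, not the LabVIEW IP; the point is that a single boolean Chip Select per device is all the protocol needs.

```python
def spi_transfer_byte(byte_out, miso_bits):
    """Shift one byte out on MOSI while sampling 8 MISO bits, one per clock."""
    byte_in = 0
    for i in range(8):
        mosi = (byte_out >> (7 - i)) & 1   # master drives MOSI, MSB first
        miso = miso_bits[i] & 1            # master samples MISO on the clock edge
        byte_in = (byte_in << 1) | miso
    return byte_in

def spi_transaction(data_out, slave_bits_per_byte):
    """A full frame: assert one boolean CS line, clock the bytes, deassert CS."""
    cs = 0                                  # drive the single CS line low (selected)
    data_in = [spi_transfer_byte(b, bits)
               for b, bits in zip(data_out, slave_bits_per_byte)]
    cs = 1                                  # release CS when the frame ends
    return data_in
```

With a loopback (the slave echoing the master's own bits), each byte comes back unchanged, which is also the test setup used later in this thread.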

3]
I have doubts about passing I/Os by reference to the cloned subVIs. Could someone please take a look and make sure that I have done it correctly?

4]
Despite my attempts, I have been unable to grasp how this set of 5 VIs (the RT host top-level VI, its 2 subVIs, the FPGA top-level VI, and the port subVI duplicated as necessary for multiple SPI ports) interacts to implement the SPI protocol. Pointers on how one should go about understanding such code are most welcome.

Thanks in anticipation,

Regards,

 

 Satyakant K.

Message 1 of 8

Hi Satyakant,

 

The attached code compiles.  I only made one change: I confirmed that the IO types configured for the IO Control Cluster on your top-level VI were the same as those in the IO Control Cluster in the SPI Engine subVI.  I made that control a type definition and set it to auto-update to ensure the two would always remain synced.  I noticed that some of the properties and methods available for the CS line on the top-level VI were not available on the subVI control for CS.

 

Also, I'm using (and only have access to) LabVIEW 2011.  I did a Save for Previous Version to 2009, but haven't been able to test compile in that version.  I did get warnings that an IO Node was not available in previous versions, so you may have issues opening it.

 

If you cannot successfully use this modified version, I would recommend you also change the IO Name Control clusters to Type Definitions and ensure that both the top level VI and the SPI Engine subVI use the same type def.

 

As an additional precaution, I would also recommend deleting the IO Name Control wire in each case of the SPI Engine subVI that uses the CS IO Node.  LabVIEW generates the code behind the scenes for these IO Nodes and I've seen (especially in older versions of LabVIEW FPGA) that deleting wires and reconnecting gives LabVIEW a chance to recompile these sections of code.

 

I didn't write this IP, but it appears that to add additional SPI ports, the developer intended for you to add another reentrant copy of the SPI Engine subVI with an incremented Port number assigned and new IO Node controls.  Then in the Host VI you can address this new engine as a new port automatically.  One downside of this code/method is that all ports share the same "communication pipe" to and from the host, so each port can only be addressed/configured one at a time.  In this case, the ports are almost equivalent in functionality to sharing multiple devices on a single SPI port and dedicating individual Chip Select lines to each device.

 

Hope this helps,

Spex
National Instruments

To the pessimist, the glass is half empty; to the optimist, the glass is half full; to the engineer, the glass is twice as big as it needs to be (a 2x safety factor)...
Message 2 of 8

Hi Spex,

Long story short: You were right; thanks for the insightful help. I might need a lot more help.

The long story:
The error you pointed out was indeed the only error in the code. As you rightly suspected, I couldn't properly open the code you attached. When I tried, it said some files were missing and asked me to point it to the appropriate directories; I chose "Ignore" and the result was that ALL the FPGA 'IO Node' blocks were missing from the VIs. Rather than reinsert them, I set out to correct the configuration of the IO Control Clusters in my original VIs. I didn't actually create a typedef for the IO Control Cluster; I dragged and dropped the cluster from the top-level VI, where all the options were correctly set, into the front panel of the SPI Engine subVI, since it seems to be needed in exactly that one location and no other (of course, if I change anything later I'll need to remember to make the change in both places). Deleting old wires and making new connections became inevitable, because LabVIEW saw the new clusters as new blocks to be dealt with afresh. After making all the changes that followed from this (delete the old cluster, re-map the new cluster into the connector pane of the subVI, delete the old wire branches in the top-level VI, reconnect both clone subVIs), the code compiled without problems.

In hindsight, HDLParsers error 851 for unconnected components sounds consistent with the actual problem of different types of I/Os being connected together. While making the above changes today, I noticed one sign of this problem that had escaped attention all along: a coercion dot at the connection of each clone subVI. It seems everyone who looked at the code missed it because:

1] The subVI icon I made had a *RED* border, so the coercion dot wasn't visible unless you focused on it and nothing else on the screen.

2] Coercion dots frequently occur for benign reasons, like a variable unexpectedly being 8/16/32-bit, signed or unsigned, or floating-point instead of integer. But in this case, the consequences were much more drastic.

Thanks for your pointers on the functioning of the code. I will keep them in mind when I sit down to understand it threadbare. I do see your point about the downside of this implementation; however, I am hoping it doesn't become a bottleneck for my application. My RT Host VI, which will pass data to the SPI VI, will most likely run at ~25 kHz, whereas my microcontroller board, a DEMO9S12XEP100 (the device that will communicate via SPI with the FPGA card), has a maximum baud rate of 12.5 MHz; the FPGA card between these two has a 40 MHz clock, which is the speed chosen for the single-cycle timed loop in the SPI Engine subVI. The SPI data transmission speed demanded by the application is unknown at present, partly because the volume of data (number of signals sampled) as well as the sampling frequencies will both be iterated upon as the project evolves. So, if my application finds SPI data transmission to be the bottleneck, I might have to use clock speeds higher than 25 kHz for the RT host VI.
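As a rough sanity check on those numbers, here is a quick back-of-the-envelope calculation (plain Python; the 10-byte frame size is an assumption for illustration only):

```python
# Even at the microcontroller's 12.5 MHz limit, one frame comfortably fits
# inside a single ~25 kHz host iteration, so the raw SPI clock itself is
# unlikely to be the bottleneck.
host_loop_hz = 25_000          # planned RT host loop rate
spi_clock_hz = 12_500_000      # DEMO9S12XEP100 maximum baud rate
frame_bits   = 8 * 10          # hypothetical 10-byte frame

wire_time_us = frame_bits / spi_clock_hz * 1e6
budget_us    = 1.0 / host_loop_hz * 1e6
print(f"{wire_time_us:.1f} us on the wire, out of a {budget_us:.1f} us iteration budget")
```

If the frame grows as signals are added, only `frame_bits` changes in this estimate.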

Before I get to that stage, my next steps for testing the SPI implementation are as follows.
1] Test using 1 byte of data at a time.
2] Test using a sequence of bytes as input, obtained by sampling a continuous-time signal generated on the host top-level VI.
3] Test using multiple signals: generate and sample multiple (say 7) such continuous-time signals on the top-level VI, and make a VI to package them into one "frame" of data, which has an instruction byte followed by the set of signal identifiers and signal values.
4] Connect this complete code into the simulation VI whose signals need to be sent to the microcontroller board.
5] Make the VI for decoding the "frame" of SPI data that the microcontroller board will send through this SPI bus.
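The frame in step 3 could be packed along these lines (a hypothetical layout sketched in Python: one instruction byte followed by (signal identifier, U8 value) pairs; the real layout is whatever the microcontroller firmware expects):

```python
def pack_frame(instruction, samples):
    """samples: dict mapping signal id (0-255) -> sampled U8 value (0-255)."""
    frame = bytearray([instruction & 0xFF])
    for sig_id in sorted(samples):
        frame += bytes([sig_id & 0xFF, samples[sig_id] & 0xFF])
    return bytes(frame)

def unpack_frame(frame):
    """Inverse of pack_frame; the same idea serves step 5 (decoding replies)."""
    instruction, payload = frame[0], frame[1:]
    samples = {payload[i]: payload[i + 1] for i in range(0, len(payload), 2)}
    return instruction, samples
```

Packing and unpacking are exact inverses, which makes the loopback tests in steps 1-3 easy to check automatically.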

Please let me know your take on these steps: whether there are any flaws, anything else I need to take into account, any way I can improve them, and any pre-existing IP or examples I might benefit from looking at.

Regards,

 Satyakant K.

Message 3 of 8

Hello all,

I'm having trouble using this SPI implementation. I can't get it to do anything at all: no output back to the screen, no output even on the DIO (checked with a CRO). I'm mortified that I might discover some very silly and elementary mistake on my part, or that I may even be using this VI the wrong way.

The details are as follows.

HARDWARE: PXI: chassis-1042, controller-8110, FPGA card - 7833R
SOFTWARE: LabVIEW 2009 SP1 and the corresponding modules for LabVIEW RT and LabVIEW FPGA

ATTACHMENTS:
1] SPI_for 7833R.zip --- My attempt at modifying the sample code to suit the hardware configuration detailed above
2] Screenshot of the RT Host VI front panel with the choices made for the various configuration parameters and controls

PROCEDURE FOLLOWED, PROBLEM SYMPTOMS SEEN:

First I 'ran' the FPGA top-level VI to generate the bitfiles. I connected the SCB-68 to the MIO (Connector 0) of the 7833R; the MOSI and MISO of my port 0 thus happen to be pins 37 and 38 respectively (DIO1 and DIO2) on the SCB-68. I connected these 2 pins together for a loopback, expecting that when I ran the RT Host top-level VI the data sent on MOSI would return through MISO and appear on the RT Host front panel in the "Read SPI" indicator array. I also connected a digital oscilloscope probe to DIO2 (with ground connected to DGND on pin 4 of the SCB-68); this CRO was verified to be working using its own 1 kHz 3 V test signal, and in the past I have used it for testing the FPGA examples that ship with the installation. I set the trigger menu to detect a rising edge with the trigger level at 2.52 V.

Then I ran the RT Host top-level VI. I saw no response of any sort. The VI looks like it is running, but nothing actually happens: the "Read SPI" array remains unchanged, and the CRO shows no activity on the MOSI and MISO lines. The front panels of the two subVIs of this top-level VI, ReadWrite and Configure, both show the same code for "error in" and "error out": the error "status" has the thick green arrow indicating a warning, the "code" field shows the number 61010, and the "source" field displays "Invoke Method: Wait for interrupt in SPI 2port (Host).vi".

This revived a doubt I had earlier (when HDLParsers error 851 was the problem): that the "IRQ" number input to the "Wait for interrupt" block in the first frame of the flat sequence structure in the FPGA top-level VI needs something connected to it, and that something should be the number 0, since the RT Host top-level VI has the "Wait for IRQ" parameter "IRQ number" wired to 0. Acting on this hypothesis I connected a constant 0 to the "Wait for interrupt" block, recompiled, and ran the VI again, but there was no change. The warning message suggested a "Possible Reason": "LabVIEW FPGA:  The Wait on IRQ method timed out before the specified interrupt was received." So I connected a -1 in the RT Host top-level VI, but still there was no change in the behaviour.
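A toy model may clarify the two timeout cases above (this is plain Python, not the LabVIEW Invoke Method, and the -1-means-wait-forever convention is an assumption about the method's semantics): a finite timeout returns a 61010-style warning when the interrupt never arrives, while waiting forever simply blocks, which from the front panel also looks like "no change".

```python
import time

def wait_on_irq(irq_asserted, timeout_ms, poll_s=0.001):
    """irq_asserted: callable returning True once the FPGA raises the IRQ.
    timeout_ms = -1 waits indefinitely."""
    deadline = None if timeout_ms < 0 else time.monotonic() + timeout_ms / 1000.0
    while not irq_asserted():
        if deadline is not None and time.monotonic() >= deadline:
            return "timed out"          # analogous to warning 61010
        time.sleep(poll_s)
    return "irq received"
```

The key observation: if the FPGA VI never asserts the interrupt (for instance because it isn't actually running), no host-side timeout value can fix that, which is consistent with seeing no change after wiring -1.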

That is how far I could get trying to debug this on my own.

Intermittently, while I was trying all of the above, the RT Host top-level VI would abort and I started getting error -61017 in the subVIs. The error source was "Open FPGA VI Reference in SPI 2port (Host).vi" with the possible reason "LabVIEW FPGA:  You must recompile the VI for the selected target." So I ended up having to recompile the same unchanged FPGA VI several times tonight just to try a different combination of input parameters on the RT Host top-level VI.

The attached zipfile has the last version of the VIs I tried, and the attached screenshot shows the 5 front panels that resulted from it. The number -61017 showed up in "error out" of the ReadWrite subVI even though I had freshly recompiled the FPGA VI.


Looking forward to your inputs,

Regards,

 Satyakant Kantarao


Message 4 of 8

Hi Satyakant,

 

I didn't appear to get a notification that you replied to this thread.  It is also possible that I missed the email notification because it came in during NIWeek, our global Graphical System Design user conference in Austin, TX, so I was out of the office.

 

Are you still having issues with this SPI core?  I think you have done a good job trying to simplify the issue and get to the root of your problem.  I think the next step is to try to simplify even more.  Are you able to create a very small/simple FPGA VI for your PXI-7833R and run it in interactive mode?  I would suggest creating a simple VI with a loop and a control that is wired to an IO node.  Then you can verify that your code is compiling and downloading to the correct target and the IO on the board is not damaged.

 

The next step would be to create a simple host application for the simple FPGA VI you just created.  Open a reference to the FPGA VI (ensuring you do not get an error with the LabVIEW Probe), and then write a series of True and False values to the FPGA front panel control in an interactive loop to ensure your host-to-FPGA communication is going smoothly.
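The logic of that smoke test can be sketched as host-side pseudocode (plain Python; the FPGA handle and its `write_control`/`read_indicator` methods are invented stand-ins for the LabVIEW FPGA Interface calls, and a fake object with a loopback wire plays the FPGA here):

```python
def smoke_test(fpga, cycles=4):
    for i in range(cycles):
        value = (i % 2 == 0)                       # alternate True/False
        fpga.write_control("DIO out", value)       # drive the IO node
        if fpga.read_indicator("DIO in") != value: # read the looped-back line
            return "host-to-FPGA path broken"
    return "host-to-FPGA communication OK"

class FakeLoopbackFPGA:
    """Pretend target: 'DIO out' is wired straight back to 'DIO in'."""
    def __init__(self):
        self.lines = {}
    def write_control(self, name, value):
        self.lines["DIO in"] = value               # the loopback wire
    def read_indicator(self, name):
        return self.lines[name]

print(smoke_test(FakeLoopbackFPGA()))   # -> host-to-FPGA communication OK
```

Passing this test with real hardware rules out the compile/download path and the IO in one stroke, before any SPI logic is involved.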

 

Regarding some of the errors you have received so far, I am concerned that your software is experiencing fundamental issues that I would like to rule out.  For example, asserting the IRQ is the very first thing that happens in the FPGA and host code, to ensure that the FPGA code is loaded and running before the host attempts to send it any commands.  The default value for the IRQ on the FPGA VI is 0, so I wouldn't have expected adding a 0 constant to change the behavior, but having the 0 explicitly coded is best practice.

 

Hope this helps.

 

Regards,

 

 

Spex
National Instruments

Message 5 of 8

Hi Mr. Spex,

I'm glad you saw my posts and responded even if with some delay. I am indeed having issues with this core, but on a new track and of a different nature now. I am responding at length since anyone else who follows this thread and needs to use SPI on a 7833R might find this useful.

My FPGA card IOs seem to be fine since I have been able to run the examples found via Example Finder. I haven't tried your other debugging tips for reasons explained below.

In response to my repeated questions to NI Tech Support India (Bangalore), an Application Engineer (whom I probably shouldn't name, in the interest of his online privacy) successfully implemented SPI on a 7833R and sent me the files. I was under the impression that he had debugged my code (attached in the previous post) and found its errors, but it turned out he did what the white paper at http://www.ni.com/white-paper/9117/en spells out for changing targets with this SPI implementation, making the minimum prescribed modifications. I tried it on my setup here, and it worked: I finally had SPI working accurately via loopback on the SCB-68.

But my original objection to this implementation still stands: it uses an entire PORT for the Chip Select, thus blocking more DIOs than necessary. I have only 1 SCB-68, and if I want to use AIs and AOs I must connect it to the MIO on the 7833R, which gives access to only 16 DIOs, enough lines for just 1 SPI port. This implementation doesn't run if the FPGA top-level VI has only 1 SPI subVI; it needs at least 2. I worked around this by assigning DIOs for the other SPI ports to connectors for which I had no SCB-68. Also, this implementation does not run at higher than 5 MHz, whereas my other device can handle up to 12.5 MHz. As you rightly pointed out, this implementation multiplexes the data transmission from the RT Host VI to the FPGA SPI port subVIs, so connecting the 3 SPI ports of my microcontroller board to the PXI would not increase the speed of data transmission. It has been suggested that it might be possible to use the RT host top-level VI as multiple subVIs inside the application VI, each set to a different port number, thus getting parallel and independent data transmission. I am not sure this can work, because it seems to me that each such SPI subVI repeated on the RT Host application VI would need a corresponding FPGA top-level VI; while the resources on the 7833R are plenty, I wonder if an .lvproj project file can have multiple FPGA top-level VIs all running at the same time. (The white paper seems to hint that it is not possible.)

Nevertheless, I decided to use what I had, go as far as 1 SPI port at 5 MHz would take me, and revisit this issue whenever it becomes unavoidable. In the attached files, "Testing SPI.vi" is the VI which uses "Example_Host SPI Dual Port.vi" as a subVI and also converts input array data from type double to U8 and output array data from U8 back to double. "Test 2.vi" is a test driver to check that "Testing SPI.vi" works properly, and it is found to do so; you can verify this too. One must make sure to run the SPI bus at 1 MHz or higher (the option is on the front panel of "Example_Host SPI Dual Port.vi"), otherwise out of nowhere you get warning 61060 with the message "Invoke Method: Wait for Interrupt in Example_Host SPI Dual Port.vi->Testing SPI.vi->Test2.vi."
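For anyone following along, the double-to-U8 conversion amounts to something like the following sketch (plain Python; the [-1, 1] range and the linear scaling are my assumptions, not taken from the VI):

```python
def double_to_u8(x, lo=-1.0, hi=1.0):
    """Map a double in [lo, hi] onto a U8 (0-255) for transmission."""
    x = min(max(x, lo), hi)                  # clamp into the representable range
    return round((x - lo) / (hi - lo) * 255)

def u8_to_double(b, lo=-1.0, hi=1.0):
    """Invert double_to_u8 after the loopback read."""
    return lo + (b / 255.0) * (hi - lo)

# Round-tripping loses at most one quantization step (2/255 for this range).
print(u8_to_double(double_to_u8(0.5)))
```

The quantization step is worth keeping in mind when comparing transmitted and received arrays: exact equality only holds for values that land on a U8 code.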

MY CURRENT PROBLEM:
My application VI (the one from which I need to send data out via SPI to an external microcontroller board and receive the processed results back for use inside the VI) belongs to proprietary software and thus is not included in this attachment. But you might be able to access it, since I emailed it to support@ni.com with a subject line including "(Reference#1899627)". The problem is that when I use this SPI implementation inside the application VI, it slows it down by a factor of ~15. The application VI by itself runs in real time, in say 15 seconds; but along with the SPI VI the same simulation takes ~230 seconds. This completely defeats the purpose of my hardware-in-the-loop platform. It looks decidedly odd that an RT simulation that runs in real time at 1 kHz, combined with an SPI bus that seems to work at 5 MHz, gives a simulation that is 15 times slower. I am unable to figure out whether this is an anomaly and something somewhere needs adjusting, or whether this is how it was designed to work and this SPI implementation is no good for RT simulations at 10 kHz.

I eagerly look forward to your erudite inputs.


Regards,

 Satyakant Kantarao

Message 6 of 8

Hi Satyakant,

 

I reviewed your code, and your challenges are certainly tied to the architecture of your code more so than the performance of the SPI core.

 

In your HIL system, you are calling more functions than necessary on each iteration of the loop.  You need to move the "Open FPGA Reference" and other supporting VIs out of the SPI function you are calling within your loop.  Also, if you need this HIL simulation to run in real time, I would recommend using a Timed Loop, rather than a While Loop, to manage the timing of your HIL application.

 

I've attached a representative image of what the block diagram should look like.

 

Screen Shot 2012-08-27 at 9.35.43 AM.png

 

The architecture shown above is very similar to how you would use the DAQmx API, if you are familiar with other NI drivers.  It rarely makes sense to open the reference and configure the hardware you are going to talk to on each loop iteration.  Typically, you only want to open and configure the hardware once (or the minimum number of times possible) and then only interact with it by sending and receiving data.  The "signal scaling and bit manipulation subVIs" in this image represent the math and bit manipulation you had on the block diagram of your code.
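The same structure can be sketched in pseudocode (plain Python; every helper name here is invented, and the counters exist only to demonstrate how often each call runs):

```python
calls = {"open": 0, "configure": 0, "readwrite": 0, "close": 0}

def open_fpga_reference(bitfile):
    calls["open"] += 1
    return {"bitfile": bitfile}

def configure_spi(ref, clock_hz, port):
    calls["configure"] += 1

def spi_read_write(ref, tx_frame):
    calls["readwrite"] += 1
    return tx_frame                      # loopback stand-in for real IO

def close_fpga_reference(ref):
    calls["close"] += 1

def run_hil(iterations):
    ref = open_fpga_reference("SPI 2port (FPGA).vi")   # once, before the loop
    configure_spi(ref, clock_hz=5_000_000, port=0)     # once, before the loop
    try:
        for i in range(iterations):                    # the (timed) HIL loop
            tx = bytes([i % 256] * 10)                 # scaled model outputs
            rx = spi_read_write(ref, tx)               # only hardware call per cycle
    finally:
        close_fpga_reference(ref)                      # once, after the loop

run_hil(100)
print(calls)   # open/configure/close once, readwrite 100 times
```

The point of the counters: open, configure, and close each fire once no matter how many iterations run, and only the read/write call scales with the loop.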

 

Hope this helps.

 

Cheers,

Spex
National Instruments

Message 7 of 8

Hi Mr. Spex,

Thank you for your suggestions; they were certainly helpful. Once again, you were right.

Thanks a lot for demonstrating via the figure how I should redo the architecture; after my terrible experience trying to rework the SPI implementation (to use a Boolean Chip Select instead of an integer), I wouldn't have modified any code without such a strong injunction from an authoritative source. With your explanation, it sounds extremely logical to implement it the way you described. And no, I am not familiar with the DAQmx API or any other NI drivers; in fact I haven't seen or used any LabVIEW beyond what I needed for this SPI implementation.

The outcome of changing the architecture as recommended is that the HiL RT VI has become less slow than before, but is still not real-time. Specifically, where earlier the HiL RT VI ran ~15 times slower, with the new architecture it runs 5 times slower, or at best 4 times slower. That's a huge improvement, indicating that the direction is correct and suggesting that more changes may make it faster.

Intending to observe the working of the SPI protocol by viewing the lines' signals on a DSO (a foolhardy exercise if you don't know the exact bit sequence being transmitted, as I discovered), I noticed something interesting that may give a clue about why the HiL RT VI is not real-time. In my use of the SPI bus, Chip Select is not involved, and due to the loopback MOSI = MISO, so there were only two signals to observe: SCLK and MOSI(=MISO). With the old architecture, where the FPGA VI Reference was opened once every iteration, I found that SCLK and MOSI were both idle for long periods; then suddenly both would show activity and all the data was transmitted. The result was that, at the appropriate Time/Div setting on the DSO, SCLK as well as MOSI looked like spikes with a huge gap between two consecutive spikes, and on zooming into a spike you found it was actually all the data getting transmitted at the specified SPI clock frequency. This is illustrated in the figures in the PDF file uploaded here. The figures have titles for the subplots but no captions or explanatory text, so I'm afraid you'll have to read this post with the file open alongside.

The first two figures are for the old architecture in the file Test2.vi (i.e. not the HiL RT VI but the test driver built to examine whether the SPI VI works properly), with the SPI bus speed set at 5 MHz and 10 bytes being transmitted. The first is the zoomed-in view of the data transmission part, showing a 5 MHz clock as expected, with MOSI varying in accordance; the entire 80 bits of data are transmitted in 16 usec. The second figure shows the zoomed-out view, where each data transmission section now looks like a spike. What's more, these spikes are 14 msec apart. Thus, this SPI implementation transmits data at 5 MHz but sits idle most of the time. I tried changing the SPI speed to 1 MHz and 50 kHz and viewed the signals; the frequency within the spike became 1 MHz and 50 kHz respectively, i.e. data transmission took 80 usec and 1.6 msec respectively, but the gap between the spikes stayed constant at 14 msec. Taken together with the fact that the HiL RT VI was set to run at 1 kHz, i.e. with a 1 msec period during which this SPI was supposed to transmit 80 bits once, but instead was observed to take ~15 times longer, it is either a bizarre coincidence or actually correlated with this 14 msec gap between the spikes.

After I implemented the architecture you suggested in both the test driver Test2.vi and the HiL RT VI ABS_Example.vi, I repeated the process of setting different SPI speeds and capturing SCLK and MOSI on a DSO (a Yokogawa DL1640, if that is relevant). The figures I got are shown in the rest of the uploaded PDF file. The short story is that the gap between spikes has shrunk, but has also become dependent on the SPI bus speed chosen.

Figs. 3 and 4 are with the new architecture for Test2.vi, with SPI speed = 5 MHz. Fig. 3 looks the same as before, but in Fig. 4 the gap has reduced to 4 msec (earlier it was 14 msec). I used the new architecture in the HiL RT VI and looked at SCLK and MOSI; the results are shown as Figs. 9 and 10. The graphs look no different from those in Figs. 3 and 4, implying that the use of a While Loop instead of a Timed Loop has not changed the behaviour of the SPI VI. The gap between the spikes is 4 msec, and this HiL RT VI was observed to run 4 times slower than real time.

Figs. 5 and 6 are with SPI speed = 50 kHz in Test2.vi, and Figs. 11 and 12 are with SPI speed = 50 kHz in the HiL RT VI ABS_Example.vi. Figs. 5 and 11 are as expected, showing data transmission of 80 bits in 1.6 msec at 50 kHz. But Figs. 6 and 12 are the interesting ones. The time interval between the start of one data transmission cycle and the next is 5 msec (1.6 msec of which is used for data transmission; the rest is idle time). At 5 MHz and 1 MHz the data transmission time was negligible compared to the idle time, so it looked like the idle time was the key variable. But with this graph, the key variable seems to be the time between the starts of consecutive data transmissions, a 'period' of the SPI bus. The HiL RT VI ran 5 times slower, i.e. it took 5 msec for one iteration that should have taken 1 msec.

Figs. 7 and 8 are for the HiL RT VI with SPI speed = 5 kHz. Fig. 7 shows 80 bits transmitted in 16 msec at 5 kHz. Fig. 8 shows a 'period' of 21 msec between starts of consecutive data transmissions. The HiL RT VI ran 21 times slower than real time.

The conclusion seems inescapable: this 'period' of the SPI implementation is what keeps the HiL RT VI from being real-time, in very precise ways. Since only the ReadWrite subVI is inside the Timed Loop or While Loop now, the reason for this idle time must lie therein. I am unable to determine whether the FPGA VI plays any role in this; data transmission proceeding at 5 MHz suggests the FPGA VI is OK, but it is possible that ReadWrite waits for some acknowledgement from the FPGA VI (even though opening the reference, the configuring, and the interrupt all happen outside the loop). The additional mystery is that the delay now depends on the SPI speed chosen; it can be as low as 4 msec and as high as 21 msec. And the shorter the data transmission time, the shorter the idle time.
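Tabulating those DSO measurements for an 80-bit frame makes the pattern visible: subtracting the pure wire time from the observed period leaves a residual overhead of a few milliseconds at every SPI speed, which by itself is enough to keep a 1 kHz loop from closing in 1 msec. (The figures are the ones quoted in this thread; the arithmetic is the only thing added.)

```python
frame_bits = 80
observed_period_s = {          # SPI clock (Hz) -> period between frame starts
    5_000_000: 4e-3,
       50_000: 5e-3,
        5_000: 21e-3,
}
for clk, period in sorted(observed_period_s.items(), reverse=True):
    wire = frame_bits / clk                       # pure transmission time
    print(f"{clk:>9} Hz: wire {wire*1e3:6.3f} ms, residual {(period - wire)*1e3:6.3f} ms")
```

The residuals come out to roughly 3.4 to 5 msec, so "period = wire time + a few msec of software overhead" fits all three measurements.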

 

I looked at the ReadWrite VI hoping to figure out the reason for this idle time. It seems obvious that the Read and Write cases are not causing it, since data transmission itself proceeds as fast as prescribed, judging from the DSO graphs. The Start and Wait Until Done cases each include a 1 msec Wait Until Next Multiple; I am not clear on the role of these 1 msec "wait until next multiple" blocks (ordinary ones, not ones from the RT Timing palette) in the Start and Wait Until Done cases. Does that mean (up to) a 2 msec delay on each iteration of the Timed Loop? Also, given the PXI real-time hardware I have, would this code still work the same way if I made them wait 10 usec instead of 1 msec? That would mean a maximum delay of 20 usec on a Timed Loop period of 1 msec. The Start case won't proceed to Write unless the FPGA declares itself idle. The Wait Until Done case never exits except on termination of the ReadWrite VI itself. The dependence of the observed idle time on the specified SPI bus speed seems uncorrelated with any of this.
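A toy model shows how such 1 msec waits could produce a multi-millisecond floor (plain Python; the count of 4 waits per transaction is a guess, chosen only to show how a ~4 msec floor could arise, not read from the VI): every state containing a "wait until next multiple" rounds the elapsed time up to the next whole millisecond, so k waits cost up to k msec even when the work between them takes microseconds.

```python
import math

def wait_until_next_multiple(t_ms, multiple_ms=1.0):
    """Round elapsed time up to the next multiple, like the LabVIEW primitive."""
    return math.ceil(t_ms / multiple_ms) * multiple_ms

def transaction_period_ms(wire_ms, waits, multiple_ms=1.0):
    t = 0.0
    for _ in range(waits):
        t = wait_until_next_multiple(t + 0.001, multiple_ms)  # ~1 us of work
    return t + wire_ms

print(transaction_period_ms(0.016, 4))          # ~4 ms even at 5 MHz
print(transaction_period_ms(0.016, 4, 0.01))    # shrinks if the multiple is 10 us
```

This also matches the question above: shrinking the wait multiple from 1 msec to 10 usec would shrink the quantization floor proportionally, if the code tolerates it.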

This is as far as I could get trying to analyse what is happening and how to fix the delay. Part of the code being proprietary, I couldn't attach it here, but it has been emailed to support@ni.com with a subject line including "(Reference#1899627)".

It has been suggested that moving the SPI block out of the Timed Loop in the HiL RT VI and using a producer-consumer architecture for asynchronous communication would let the VI run in real time again, though SPI data would then be available with some delay. Also, the use of DMA FIFOs for faster data transfer between controller and FPGA should give much higher SPI speeds (the current limit being 5 MHz). I have yet to begin reading up on these variations. If DMA FIFOs increase the speed I will certainly use them at some point.

Once again, I look forward to your views and suggestions and would be thankful if you could find the time for it. Thanks in anticipation,

Regards,

 Satyakant Kantarao

Message 8 of 8