
Read Excel File and Write to FIFO


I would like to read in an Excel-exported file (CSV) with four columns of data, each with 250,000 INT16 elements, in a Host VI and write that data to a FIFO that can then be read by my Target (FPGA) VI.  Any thoughts or code snippets to get me started are greatly appreciated.




Message 1 of 12

Hi John,


@johnsoja wrote:

Any thoughts or code snippets to get me started are greatly appreciated.

What have you tried and where are you stuck?

Would you mind attaching your current project?


@johnsoja wrote:

I would like to read in an Excel-exported file (CSV) with four columns of data, each with 250,000 INT16 elements, in a Host VI and write that data to a FIFO that can then be read by my Target (FPGA) VI.

So there are 2 (or 3) problems:

  1. reading a spreadsheet file
  2. writing to a FIFO
  3. using a real-time VI to handle communication between your Windows(?) computer and your FPGA target

All 3 items are covered by example VIs/projects that ship with LabVIEW! (So I ask again: what have you tried so far?)

Best regards,

using LV2016/2019/2020 on Win8.1/10+cRIO
Message 2 of 12

I haven't started the code yet.  I have done numerous projects writing high speed data to a FIFO on the target (FPGA) side, reading it on the Host side, and writing it to disk for post processing.  


I have not done any projects going the other direction (File on Host, writing to Host FIFO and picking up on Target side), so was just looking for any basic direction.  


Are there specific examples that you can recommend that are tailored to each of the steps I need to take?  Even better, do you know of one example that does all of these basic steps in one?

Message 3 of 12

Several points regarding your question in the previous Reply:

  • A "CSV" (Comma-Separated Values) file is not an "Excel" file -- it is a form of "Delimited Spreadsheet File" that uses lines of text to delimit rows and a designated character to delimit columns (two popular choices are the comma, hence the name CSV, and the <tab>, which can make the text "look like" it is organized in columns).
  • LabVIEW's "Read Delimited Spreadsheet" reads text files that are "separated" into Rows (by <EOL>) and Columns (by "," or <tab>, or maybe something else) into a 2D Array of "something" (numeric, string).
  • By default, Read Delimited Spreadsheet uses <tab> as the delimiter.  You can change it to a comma (right-click the function and open its on-line Help -- it will show you where this input is, at the bottom of the function).
  • This will get your CSV file into memory as a 4 column x many row Array of <numeric or string>.  Note that you can also get it read in as a 4 row x many column Array (use the Transpose input).
  • I'm not sure how you get from the Host to the Target (where I presume the FPGA resides).  I do this using Network Streams from my PC to a cRIO, and then the cRIO handles the FPGA FIFO for me.  [Like you, I also normally go from FPGA to cRIO to Host to Disk].
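To make the Read Delimited Spreadsheet bullets concrete outside LabVIEW, here is a rough Python equivalent of what the function does with a 4-column file. This is an illustration only; the `delimiter` and `transpose` parameters just mirror the function's inputs described above.

```python
# Sketch of what LabVIEW's "Read Delimited Spreadsheet" does, for illustration.
# Rows are delimited by lines of text; columns by a designated character
# (tab by default, comma for a CSV file).

def read_delimited_spreadsheet(path, delimiter="\t", transpose=False):
    with open(path) as f:
        rows = [[int(v) for v in line.split(delimiter)]
                for line in f if line.strip()]
    if transpose:  # swap rows and columns, like the Transpose input
        rows = [list(col) for col in zip(*rows)]
    return rows
```

With a 4-column, many-row file you get a list of 4-element rows; wiring `transpose=True` instead yields 4 rows of many columns, as the bullet above notes.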

Bob Schor

Message 4 of 12

Thanks for the response.  I've done some work on this and have it somewhat working, but need some additional insight.  Hopefully I've got it far enough along that it will make sense what I'm trying to accomplish here.  I've attached screenshots of a snippet of the relevant host side vi code and the host side data display.  


Host to target FIFO size is currently set to 1023 U64 elements.


I start with a tab-delimited text file of 4 columns of data, as shown in the "16bit integer" indicator, that can be as deep as say 250,000 rows.  I massage that data to produce a single-column array of all of the data points, as shown in the "output array" indicator.  I then decimate the array and use Join Numbers to produce a single U64 element that represents one row of the four-column data, and write each to the FIFO.
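For illustration, the Decimate + Join Numbers step amounts to packing four 16-bit values into one 64-bit word. Here is a Python sketch; the word order chosen (first value in the high word) is arbitrary and must match however the FPGA side splits the U64 back apart.

```python
def pack_row(a, b, c, d):
    """Pack four 16-bit values (signed INT16 or unsigned) into one U64.
    Word order (a = high word) is an arbitrary choice for this sketch;
    it must match the order the FPGA uses when splitting the word."""
    # Mask to 16 bits so negative INT16 values become their
    # two's-complement bit patterns before shifting.
    words = [(v & 0xFFFF) for v in (a, b, c, d)]
    return (words[0] << 48) | (words[1] << 32) | (words[2] << 16) | words[3]
```

Each call corresponds to one row of the 4-column data becoming one U64 FIFO element.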


This is where I'm not sure where to go.  To support large data sets, I need to write into the FIFO in blocks of data small enough to fit.  In the example screenshots, the data file is very small so that LabVIEW would let me run.  For large data sets, it obviously complains that the data is larger than the allowable FIFO size.  I know I can further increase the FIFO depth, but still not enough to support say 250,000 elements, so I need some method to write smaller chunks at a time.  Also, I would like it to write the data file contents to the FIFO only once.  Currently it repeats the writing of the data file over and over again.


I've got the FPGA side working (reading the FIFO and using the data the way that I want on that end).  


Any help is greatly appreciated!



Message 5 of 12

I am not going to bother to open the .jpg files -- it will only get me angry and upset (imagine if I asked you for help on a complex MATLAB or C++ program and attached a picture of a 1000-line printout of my code, instead of attaching the code itself ...).


Since I don't have code to examine, let me ask some questions and make some observations that might (or might not) be appropriate, as I don't (yet) fully understand what you are trying to accomplish.


Here are my assumptions:

  • You mention a Host, which leads me to believe you are running LabVIEW on a PC, possibly with a TCP/IP connection to a LabVIEW RT-Target processor (a cRIO?) that possesses an FPGA to allow fast, highly parallel asynchronous processing of data.
  • Your dataset to be processed is around 250,000 "elements".  It appears that the data might be in the form of a 2D Array of 4 columns and many rows (250,000?).  You appear to be "massaging" the data (I presume in the RT-Target side) to combine the 4 columns (of, perhaps, U16 integers) into a single U64 for purposes of sending them through the FIFO to the FPGA.
  • The nature of the FPGA processing is unclear.  What does seem to be clear is that the FPGA cannot process the entire dataset, but needs to work on a subset (say 1024 elements at a time), but no mention of what kind of "output" it produces.  Does it return a single number, does it "transform" the 1024 elements into a "response" (which could be another 1024 elements to be sent back to the Target, and ultimately to the Host)?
  • One of the points I make when I tell students about LabVIEW is the importance (in LabVIEW) of Time as an integral part of the Language, as well as the Three Laws of Data Flow that make LabVIEW capable of "natural" Parallel Processing.  It is not clear (to me) how Time enters into this situation.  Are the 250,000 data points being continually generated and you are trying to process them "as they arrive"?  Do the data represent equally-spaced (in time or in space) samples, with order important?  (I'm pretty sure the answer must be "Yes", but it would be nice to know something about the nature of the data).
  • What is the reason for not doing all of the processing on the Host?  The Host surely has access to the data, has lots of memory (and, if you need more, PC memory is fairly inexpensive, and external storage is similarly cheap and fast).  I'm guessing there is a "need for speed", so knowing a bit more about the nature of the processing would be helpful.

Here are some suggestions.  I'm going to assume that you have a file containing 250,000 "samples" that you want to "process as fast as possible" by doing some "transformation" on the data (perhaps some form of filtering, or time-varying FFT, etc.).  I'm also going to assume that you can "serially" process your data by dividing it up into blocks of 10,000 "samples" and processing them one at a time, perhaps having two in memory at the same time so that you can "process across the 10,000 sample block boundary".  I'm not going to worry about "block transitions" at this point, however ...


  • Start by having the Host stream Block 1 to the Target.  
  • The Host next streams Block 2 to the Target, and then waits to receive "Processed Block 1" from the Target.
  • The Host continues in this manner until it streams Block "Last" to the Target, then waits to receive the final "Last-1" and "Last" Processed blocks back.  This should keep the TCP/IP channel between Host and Target working efficiently.
  • On the Target side, the Target takes the 10,000 samples (which I'm assuming is 10,000 rows of 4 columns of U16) and transfers 1000 samples at a time (or 1024) as a 1000-element Array of 4 U16.  Don't waste time compressing 4 U16s into a U64 on the Target and reconstituting them as 4 U16s on the FPGA -- configure the FIFO directly.
  • Let the FPGA "do its thing".  I presume it transforms your 1000 rows into an altered 1000 rows, but I have no idea if this is correct because I can't see the code.
  • The FPGA uses another FIFO to send the processed data back to the Target, which then sends it on to the Host using a Network Stream.
  • Your limiting resource is the FPGA.  Your Target loop looks like "Read Data, send to FPGA, receive from FPGA, write back to Host" and repeat until done.  Data should keep flowing from Host to Target (Network Streams, TCP/IP) to FPGA (FIFO) to Target (FIFO) to Host (Network Streams) with the three Processors running independently and asynchronously, gated only by the presence of Data ultimately supplied by the Host "as fast as practical".
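The pipelined scheme above can be sketched outside LabVIEW. In this Python illustration, `send_block` and `receive_processed` are hypothetical stand-ins for the host-side Network Stream write and read (not real NI APIs); the point is that at most two blocks are in flight at once, as in the steps above.

```python
from collections import deque

BLOCK = 10_000  # samples per block, per the scheme above

def stream_pipelined(samples, send_block, receive_processed):
    """Send block k+1 before waiting on processed block k, keeping up
    to two blocks in flight so the channel never sits idle.
    send_block/receive_processed stand in for Network Stream calls."""
    blocks = [samples[i:i + BLOCK] for i in range(0, len(samples), BLOCK)]
    in_flight = deque()
    results = []
    for block in blocks:
        send_block(block)
        in_flight.append(len(block))
        if len(in_flight) == 2:      # two outstanding: wait for the older one
            in_flight.popleft()
            results.append(receive_processed())
    while in_flight:                 # drain the final block(s)
        in_flight.popleft()
        results.append(receive_processed())
    return results
```

The deque here plays the role of the "two blocks in memory at the same time" bookkeeping; the actual transport and FPGA processing are outside this sketch.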

For more details, provide more details by attaching LabVIEW Code (files with extension .VI) with enough detail to permit understanding (so TypeDefs and sub-VIs are important, as well).  In a pinch, you can compress the folder holding the Project and attach the resulting .ZIP file.


Bob Schor

Message 6 of 12
Accepted by topic author johnsoja

Hi JJ,

A couple of ideas.


1) There are two sides to every FIFO, whatever the direction - a host side and an FPGA side. The FPGA side is limited, but the host side resides in host memory, so it can be grown further. This is done using the FIFO.Configure method. Your full data set is around 2 MB packed (250,000 U64 elements), so it would be reasonable to just increase the host side to hold all your samples and let the DMA engine manage writing the individual chunks. This assumes a reasonable cap on the data size.


2) If you can't guarantee the maximum size (or want to make it robust against any size), then you will need to create a loop which takes the next n rows from the data and writes them into the FIFO, up to the maximum size. There is no trick to this in LabVIEW - just use Array Subset to get the current chunk, and track how far into the array you have gotten.
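As an illustration only, that loop amounts to the following Python sketch, where `write_fifo` is a hypothetical stand-in for the host-side FIFO write and the slice corresponds to LabVIEW's Array Subset plus a shift register tracking the offset.

```python
def write_in_chunks(data, write_fifo, chunk_size=1023):
    """Write `data` to a FIFO in pieces no larger than chunk_size.
    `write_fifo` stands in for the host-side FIFO write call; the
    offset plays the role of a shift register feeding Array Subset."""
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + chunk_size]  # Array Subset
        write_fifo(chunk)
        offset += len(chunk)
```

A real host VI would also need the write call's timeout/remaining-space handling, which is omitted here.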


Repeating depends on more than we can see in the screenshots - if the write is inside a loop, then it will keep writing the file; you need to move this code out of any loops it is in to make it run only once.

James Mc
CLA and cRIO Fanatic
Message 7 of 12



Thank you for your responses.  For the record, there is some company proprietary data associated with my VIs, which is why I am unable to attach the actual code.  I do appreciate your frustration when trying to help without all of the code shown. 


As for the setup I'm using, I have a PXIe-1085 chassis with a PXIe-8880 controller running LabVIEW 2017 (Host) with a PXI-7954 RIO FPGA Module Target.


Your understanding of the data file contents and format is correct.  It is a 2D array of 4 columns and many rows (minimally 250,000, and could be more).  I combine the 4 columns of 16-bit data into a single column of 64-bit elements for writing to the FIFO on the Host side.


The RIO FPGA VI code reads each element from the FIFO at a 1 kHz rate (every 1 ms), splits it back into the four 16-bit words, and transmits them over a UART interface, which is also implemented in the FPGA Target VI code.  I have the FPGA Target VI code that reads each FIFO element and transmits it over the UART working.
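For readers following along, the split performed on the FPGA side is just the inverse of the host-side Join Numbers packing. As a Python sketch (the high-word-first order is an assumption here and must match however the host packed the data):

```python
def unpack_row(word):
    """Split one U64 into four 16-bit words, high word first.
    The word order is an assumption for this sketch; it must match
    the host-side Join Numbers wiring."""
    return [(word >> shift) & 0xFFFF for shift in (48, 32, 16, 0)]

def to_int16(u):
    """Reinterpret an unsigned 16-bit word as a signed INT16."""
    return u - 0x10000 if u >= 0x8000 else u
```

On the real target this split is done with LabVIEW's Split Number function inside the FPGA loop, not in software.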


The 250,000 elements are stored in a text file.  The goal is that the Host would read in all of the elements in the data file and write them to the FIFO.  The Target (FPGA) side would sequentially read each element and transmit each out the UART at a 1 kHz rate.  I'm not doing any processing of the data on the Target side and sending it back to the Host.  It's a one-way data path: Host to Target via FIFO, and out of the Target's UART to a separate piece of hardware.


I believe my main problem was that I thought I was limited in how many elements I could write into the FIFO on the Host side.  If I'm interpreting James' response correctly, it seems as though I can write all of my data (250,000 elements, for example) to the FIFO on the Host side if I use the FIFO.Configure method.  Is that correct?  Then the Target side would just read through the entire FIFO contents element by element at 1 kHz, transmitting each out the UART?  Also, I would like the Host to write the entire data file contents into the FIFO just once, and the Target to read and transmit all of the elements once and then stop.  I believe currently it may write the data on the Host side over and over, and the Target side would keep reading endlessly.  Is there a way to prevent this and only run through the data file contents once?


Thanks again.



Message 8 of 12

Hi JJ,


Once the target has read all the elements you have written, it will just start returning timeouts, so you will need to handle that case on the target. With that in place, it should give the behaviour you want by default.
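For illustration, that target-side behaviour can be sketched in Python, with `read_one` as a hypothetical stand-in for an FPGA FIFO read that returns `None` on timeout. The stop-after-N-timeouts rule is one possible policy, not part of any NI API.

```python
def drain_fifo(read_one, max_consecutive_timeouts=3):
    """Read elements until read_one() has timed out several times in a
    row, which signals that the host has stopped writing.  read_one
    returns an element, or None on timeout (stand-in for a FIFO read
    with a finite timeout)."""
    out, misses = [], 0
    while misses < max_consecutive_timeouts:
        element = read_one()
        if element is None:
            misses += 1          # timeout: count it, keep polling
        else:
            misses = 0           # got data: reset the timeout counter
            out.append(element)
    return out
```

In the actual FPGA VI this would be a loop around the FIFO read node, checking its Timed Out? output instead of `None`.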




James Mc
CLA and cRIO Fanatic
Message 9 of 12



Thanks so much for your help.  My code is now doing exactly what I want.  The key enablers were understanding that I could set the Host-side FIFO depth to a value much larger than the Target side's, and of course simply moving the FIFO write on the Host side outside of the while loop.


Thanks again to you and Bob for taking the time to help me out.  Much appreciated!



Message 10 of 12