
NIHSDIO_ATTR_SPACE_AVAILABLE_IN_STREAMING_WAVEFORM

Hello, I'm new to using the PCI-6561 board so I'm hoping that I'm just missing something.

 

I want to stream 4 channels of data at approximately 32 MHz, double data rate.  My data is roughly 1 GB in size, so it won't fit in the board's memory all at once.  Using the examples provided with the board, I tried the following:

 

1. Set up a large waveform on the board.

2. Initiate streaming.

3. Check how much memory is available on the board using NIHSDIO_ATTR_SPACE_AVAILABLE_IN_STREAMING_WAVEFORM; if the free space is at least 5% of the initial allocation, send another chunk of data.

4. Repeat until the entire buffer has been fed to the 6561.
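
Roughly, my loop looks like the sketch below in C (simplified; the waveform name, sizes, and error handling are placeholders rather than my exact code, and I'm using the NI-HSDIO C API calls from the shipped examples):

#include "niHSDIO.h"

#define WFM       "streamWfm"            /* placeholder waveform name */
#define WFM_SIZE  (8 * 1024 * 1024)      /* samples allocated on the board */
#define CHUNK     (WFM_SIZE / 20)        /* refill in 5% chunks */
#define CHECK(fn) do { ViStatus s_ = (fn); if (s_ < 0) return s_; } while (0)

/* U32 writes shown; the U16 variant applies for a 2-byte data width.
 * Assumes total >= WFM_SIZE. */
ViStatus streamLargeBuffer(ViSession vi, ViUInt32 *data, ViInt64 total)
{
    ViInt64 written = 0;
    ViInt32 space = 0;

    /* 1. Set up a large streaming waveform on the board. */
    CHECK(niHSDIO_AllocateNamedWaveform(vi, WFM, WFM_SIZE));
    CHECK(niHSDIO_SetAttributeViBoolean(vi, VI_NULL,
          NIHSDIO_ATTR_STREAMING_ENABLED, VI_TRUE));
    CHECK(niHSDIO_SetAttributeViString(vi, VI_NULL,
          NIHSDIO_ATTR_STREAMING_WAVEFORM_NAME, WFM));
    CHECK(niHSDIO_WriteNamedWaveformU32(vi, WFM, WFM_SIZE, data));
    written = WFM_SIZE;

    /* 2. Initiate streaming. */
    CHECK(niHSDIO_Initiate(vi));

    /* 3 and 4. Poll the space-available attribute and top up one chunk
     * at a time until the whole host buffer has been handed to the 6561. */
    while (written < total) {
        CHECK(niHSDIO_GetAttributeViInt32(vi, VI_NULL,
              NIHSDIO_ATTR_SPACE_AVAILABLE_IN_STREAMING_WAVEFORM, &space));
        if (space >= CHUNK) {
            ViInt32 n = (ViInt32)(total - written < CHUNK ? total - written
                                                          : CHUNK);
            CHECK(niHSDIO_WriteNamedWaveformU32(vi, WFM, n, &data[written]));
            written += n;
        }
    }
    return VI_SUCCESS;
}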

 

The problem is with the value returned by the space-available attribute.  Initially the code runs as expected: the returned value grows as the board generates data.  When the available space reaches about 4 MB, the code sends another chunk of the larger buffer.  However, on the next query the attribute reports even more space available.  Instead of the available space going down because I just added a chunk, it goes up; the value moves the wrong way after every write.

 

This only seems to occur at a 32 MHz clock and above.  When I go down to 16 MHz, the value returned from NIHSDIO_ATTR_SPACE_AVAILABLE_IN_STREAMING_WAVEFORM behaves correctly (it goes down when I add data).

 

Any thoughts would be appreciated.

 

Thanks, Curtis.

 

Message 1 of 13

Hi Curtis,

 

There are a couple of possibilities here. One thing that may be happening is that you are reading the available space before the next chunk of data has finished transferring. This may explain why you see this behavior with the 32 MHz clock and not the 16 MHz clock. Does the number ever go back down when using the 32 MHz clock? Also, does the space available change if you change the size of the chunk you are sending?

 

Also, try using the LabVIEW equivalent called "Space Available in Streaming Waveform." It can be found in the niHSDIO property node under Dynamic Generation -> Data Transfer -> Streaming -> Space Available in Streaming Waveform.

 

If this behavior persists, please attach screenshots of what is happening and a simple VI that can reproduce the behavior.

Josh Y.
Applications Engineer
National Instruments
Message 2 of 13

Hi Curtis,

 

After reviewing the situation again, I believe this is expected behavior. What's happening is that in the time it takes to write 5% to the onboard memory, the board has generated (and therefore freed) more than 5% of the data. Hence, once there is more than 5% of space available, there will always be more than 5% available, because the writes never catch up. To solve this, you could write larger chunks to the onboard memory. For example, if you waited until there was 50% of space available and then wrote 50% worth of data, you should read back somewhere between 50% and 100% of space available. Writing one large chunk rather than many small chunks is also more efficient because you do not have to call the write function as often. I believe this is also why you see the behavior with the 32 MHz clock and not the 16 MHz clock. Hope this helps.
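
To put rough numbers on it (the rates below are illustrative assumptions, not measurements: ~110 MB/s sustained PCI writes and 2 bytes per sample):

#include <stdio.h>

int main(void)
{
    const double pciMBps = 110.0;   /* assumed sustained PCI write rate */
    const double chunkMB = 4.0;     /* the ~5% refill chunk */
    /* Assumed generation rates at 2 bytes/sample: 32 MHz DDR, 16 MHz DDR. */
    const double genMBps[] = { 128.0, 64.0 };

    for (int i = 0; i < 2; i++) {
        double writeSec = chunkMB / pciMBps;      /* time to push one chunk */
        double freedMB  = genMBps[i] * writeSec;  /* space freed meanwhile */
        printf("gen %.0f MB/s: writing %.1f MB frees %.2f MB meanwhile (%s)\n",
               genMBps[i], chunkMB, freedMB,
               freedMB > chunkMB ? "free space keeps growing"
                                 : "free space shrinks");
    }
    return 0;
}

At the faster rate, each 4 MB write frees about 4.65 MB while it runs, so the reported free space can only grow; at the slower rate it frees only about 2.33 MB, which is why the attribute behaves as expected there.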

Josh Y.
Applications Engineer
National Instruments
Message 3 of 13

Hi Josh,

 

Thanks for the input.  After watching some of the data I have gathered, I am beginning to agree with you.  The problem, however, is that I don't think I will ever be able to keep up.  It seems I will have to start feeding the board well before a full block of board memory is free; in other words, I will need to start feeding 10M samples when the onboard free space is around 5M samples.  To further complicate things, my target clock is 32 MHz double data rate.  We have a very strong computer (all of my data is in memory, fast processor, etc.).  Is there anything on the PCI bus that can slow me down (like other cards)?  Also, I noticed the help file talks a little about setting up DMA.  Can I set up DMA from the computer to the board?  Is there example code for that?

 

Really appreciate the help.

 

Thanks,

 

-Curtis.

Message 4 of 13

Hi Curtis,

 

Here is something to consider.  For generation, the 6561 uses a default data width of 4, meaning 4 bytes of data per sample on the board.  If you run your data at an effective rate of 64 MHz (32 MHz DDR), the amount of data across the backplane is 64 MHz * 4 bytes = 256 MB/s.  PCI bandwidth is 130 MB/s theoretical, with actual sustained rates around 110 MB/s.  The issue is that you cannot keep up with generation at that rate, because the card's form factor simply cannot stream that much data across to the onboard memory.
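
If it helps, here is that arithmetic in code form (the 110 MB/s figure is the practical estimate above, not a spec):

#include <stdio.h>

/* Required streaming bandwidth: effective sample rate times data width. */
static double requiredMBps(double clockHz, int ddr, int widthBytes)
{
    return clockHz * (ddr ? 2.0 : 1.0) * widthBytes / 1e6;
}

int main(void)
{
    const double pciBudget = 110.0;          /* practical sustained PCI rate */
    double need = requiredMBps(32e6, 1, 4);  /* 32 MHz DDR, 4-byte width */

    printf("need %.0f MB/s, PCI sustains ~%.0f MB/s -> %s\n",
           need, pciBudget,
           need > pciBudget ? "cannot stream continuously" : "feasible");
    return 0;
}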

 

If you can keep the data below or around that threshold, you should see better results.  The board uses DMA by default.  If you need faster data rates, you can consider moving to the NI 6587, a FlexRIO Adapter Module with an FPGA backend.  You would need to go to a PXIe system, but your bandwidth would be increased vs PCI/PXI.

 

Besides that, another way to reduce memory usage is to use scripting to repeat the parts of the waveform that are repetitive in nature.  If your sequence has repetitions that can be factored out and scripted, you can sort of hack your way to an acceptable solution, as in the sketch below.
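
(The waveform and script names here are made up, and this assumes the C API's scripted generation mode; treat it as a sketch of the idea, not exact code.)

#include "niHSDIO.h"

/* Store a repetitive section once on the board and loop it with a script
 * instead of streaming every repetition across the bus. */
ViStatus useScriptedGeneration(ViSession vi)
{
    ViStatus status;
    const ViChar script[] =
        "script repeatDemo\n"
        "  generate header\n"        /* one-shot preamble waveform */
        "  repeat 1000\n"
        "    generate pattern\n"     /* repetitive section, stored once */
        "  end repeat\n"
        "end script";

    if ((status = niHSDIO_ConfigureGenerationMode(vi, NIHSDIO_VAL_SCRIPTED)) < 0)
        return status;
    if ((status = niHSDIO_WriteScript(vi, script)) < 0)
        return status;
    return niHSDIO_SetAttributeViString(vi, VI_NULL,
                                        NIHSDIO_ATTR_SCRIPT_TO_GENERATE,
                                        "repeatDemo");
}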

Kyle A.
National Instruments
Senior Applications Engineer
Message 5 of 13

Hi Kyle,

 

We are using a 6562 board.  The default data width is 2 bytes, which puts us at 128 MB/s.  It seems this board does not allow a data width of 1 during double data rate generation, which makes our problem worse.  We are feeding out 4 channels of data, so 12 bits of each 2-byte sample are wasted: for every 4 bits of my data, I have to pass 16 bits to the board.  But that doesn't seem to matter, since I can't keep up with even a 32 MHz SDR clock.

 

My testing has shown that we are writing approximately 19M samples per second (about 38 MB/s).  I'm disappointed that I can't get more out of the PCI bus.

 

Message 6 of 13

Ah, I see.  So on the 6562, in SDR mode the data width is in fact 2 bytes wide.  In DDR mode, the data width is reduced to 1 byte, according to this KB on data width.  What error did you receive when running the board in DDR mode with a data width of 1 byte?
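
For reference, a minimal attempt in the C API would look like the sketch below; I'm assuming the data width property is exposed as NIHSDIO_ATTR_DATA_WIDTH, and the status it returns is what I'm curious about.

#include "niHSDIO.h"

/* Try a 1-byte data width; on the 6562 in DDR mode, the status returned
 * here (or by the subsequent waveform write) shows whether it is rejected. */
ViStatus tryOneByteWidth(ViSession vi)
{
    return niHSDIO_SetAttributeViInt32(vi, VI_NULL, NIHSDIO_ATTR_DATA_WIDTH, 1);
}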

 

Also, is the 38 MB/s your benchmark for the maximum throughput of the system?  That rate can easily be achieved over PCI, so it may be good to take a step back and consider a few other requirements for streaming.

 

I am going to cover some high-level topics to keep the knowledge in this thread continuous.  The way streaming works, it is always only as fast as its weakest link.  For instance, having the PCI bus run at around 110 MB/s is great, but if you are loading data from your hard disk drive at 40 MB/s, your maximum streaming rate will be the slower of the two.

 

There are benchmark utilities that measure how fast your hard drive and RAM are.  For the hard drive, there is a LabVIEW example under Optimizing Applications » General, where you can run Read From File Speed Test.vi or Write To File Speed Test.vi to benchmark your HDD.

 

Streaming from RAM will give you speed improvements, and this can be benchmarked as well.  If you go to ni.com/streaming and check out the HSDIO generation examples, the advanced.zip file contains the HSDIO stream-from-memory benchmark utility.  This benchmarks how fast you can move data from the memory of your PC to the board.

 

All this to say, each PC/Motherboard is created differently with different pipelines in place to allow for this data to be transferred.  Benchmarking will tell us for sure how the computer performs when it comes to streaming.  If you could report back your performance, we may be able to see where the bottleneck lies.

 

At 32 MHz DDR, and assuming a data width of 1 byte, you should see 64 MB/s of data going across, which I think is very achievable.  I would recommend waiting for 8 MB to empty from the buffer before reloading, but 4 MB should be OK as well.  If you still don't see it working, try the streaming examples at the location I posted above and see if the results are different.  Hopefully this will lead us to the correct solution for your application.  Let me know how I can assist you further, thank you.

Kyle A.
National Instruments
Senior Applications Engineer
Message 7 of 13

Hey Kyle,

 

I'll give the tests a try.  I have been coding in C for my work, so I will have to get LabVIEW set up to run them.

 

Will let you know.

 

Thanks.

 

-Curtis.

Message 8 of 13

Hi Kyle,

 

Wanted to let you know I haven't spent much time with the system.  I'm new to LabVIEW and VIs, and the VI you suggested isn't quite running cleanly, so I'm waiting on our LabVIEW guy to help me out.

 

-Curtis.

Message 9 of 13

Thanks for the update, let me know if I can be of further assistance.

Kyle A.
National Instruments
Senior Applications Engineer
Message 10 of 13