
Driver Development Kit (DDK)


x-series DMA -- single chunky link for finite SGL -- possible?


 

Ok, I guess I stumped you on the previous question regarding the AOUT FIFO width. Here is another one.

 

Can I have a ChInCh SGL program that has a single chunky link with the Done flag set?

 

I compose the SGL myself (not using the DDK chunky link classes). I have LinkChainRing and ReuseLinkRing SGLs working fine. However, if I try to run a finite SGL with one chunky link, the driver hangs in tCHInChDMAChannelController::start waiting for the Link Ready bit.

 

So there appears to be an issue with starting a finite DMA SGL on a chunky link that has the Done bit set -- is that right? Is there a way around this?

 

thanks,

 

--spg

 

------------------
scott gillespie
applied brain, inc.
------------------
Message 1 of 8

Hello Again!

 

I tried your experiment and it worked great for me. I used aiex3 and created a new topology in CHInCh\dmaProperties.h. I then made sure that this new topology was added everywhere the topology logic is used in the other CHInCh files.

 

Using the topology with aiex3 was enough to get 2048 samples into the buffer; then the FIFO overflowed. The reason is that the measurement is continuous while the DMA buffer was a single-shot SGL of 2048 samples. The modified DDK is attached.

 

Please note that this is not an official example; it is a quick hack for testing.

 

I hope this helps.

Message 2 of 8

Steven T. --

 

I just went back and tested my implementation, and it is now working with a single link as well. It looks like there was another problem with the SGL I had previously laid out (I've rewritten some of the memory-mapping code since then).

 

Thanks for looking into this for me, and my apologies for sending you on a wild goose chase.

 

cheers,

 

spg

Message 3 of 8

 

Hmmm, now that I look closer, I am still having a problem with single chunks, but only if the link has a single transfer.

 

If you get another chance, can you try reducing your pagesPerNode value to 1? In tScatterGatherDMABuffer::initialize:

 

      case kSingleLink:
         numNodes     = 1;
         pagesPerNode = 1;  //  <----------- reduce this to 1
         break;

 

thanks again for your help,

 

--spg

 

 

Message 4 of 8

To clarify further, this works:

 

[first chunk size:48, address:000000007d99a000]


000000007d99a000) 80000000 00000000 0000000000000000
000000007d99a010) 00000000 00000190 000000018665d000
000000007d99a020) 00000000 00000190 000000018665d190

 

and this does not (it hangs waiting for the Link Ready flag):

 

[first chunk size:32, address:000000007d99a000]


000000007d99a000) 80000000 00000000 0000000000000000
000000007d99a010) 00000000 00000320 000000018665d000

 

 

Message 5 of 8
Solution
Accepted by topic author spg

Hello,

 

I'm seeing the same behavior in my tests.

 

Since the DMA controller processes the entire chunky link, having one or two page descriptors there makes little performance difference (ignoring the Link Ready hang). Is there a reason why you must use one page descriptor in the chunky link? It sounds like you should use the Linear DMA Buffer.

 

I'll be looking into this more, but understanding why you want to use this method will help me. It's fine if you are just trying to further your understanding of X Series DMA.

 

Thanks,

Steven T.

Message 6 of 8

Steven T. --

 

Ok, thanks for verifying that.

 

>> Is there a reason why you must use one page descriptor in the chunky link?

 

Not necessarily; however, since I am constructing my own SGLs, I need to know exactly what I can and can't do. So when I see behavior like this, I first want to understand whether I am doing something wrong, and then determine a workaround if it is a hardware limitation.

 

As I am writing a driver that supports several different clients, I need to provide a generalized interface that can handle any request. For example, if a client requests a single-byte transfer, the driver has to fail gracefully (or implement the request without DMA), not hang 🙂

 

Having you verify this limitation (if that is what it is) is extremely useful, since I can now deploy the workaround (add an extra transfer to any single-link chunky, and use a direct write or FIFO preload for any single-byte transfer) and stop wondering whether I have missed some other essential register setting or flag.

 

Thanks again, and if you do find out anything more, let me know.

 

cheers,

 

spg

 

Message 7 of 8

Hello,

 

I asked the hardware developers about your question, and they confirmed the behavior you are seeing. If you need more help working around this behavior, please let us know.

 

Thanks,

Steven T.

Message 8 of 8