LabVIEW


Reading 12 DAQ cards with 16 channels each


@ahmalk71 wrote:

The Array Replace operation works on a predefined fixed-size array, so it does not allocate memory and therefore should be less demanding, "I think".


Caveat: I'm in speculative territory here (for me at least).

 

I agree that using a predefined fixed-size array is *much* better than an approach that could require new memory allocation.  But the thing I was talking about was the CPU time needed to *copy* the array elements from the array/waveform returned by DAQmx Read over to the fixed array's memory space.

 

The approach I previously described (using Queues under Windows at least) is even better in that it basically just transfers ownership of the dataspace without needing to copy array elements at all.
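
LabVIEW diagrams don't paste well as text, but the idea maps onto any reference-passing queue. Here's a rough Python sketch of the same ownership handoff (numpy standing in for the waveform data; this is an analogy, not how LabVIEW implements it internally):

import queue
import numpy as np

q = queue.Queue()

# Producer side: enqueue the block itself. Only a reference changes
# hands; the 16000 samples are not copied.
block = np.empty(16 * 1000)
q.put(block)

# Consumer side: takes over the very same buffer, again with no
# element-by-element copy.
same_block = q.get()
assert same_block is block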

 

On a different note, one common way to deliver data via TCP/IP is to use the "Flatten to String" function on the send side and "Unflatten From String" on the receive side.  It strikes me that it might be more efficient to use a Typecast instead.  Couldn't guarantee it though, and it also seems like a more fragile approach to be used only if really necessary.
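
For anyone who thinks better in text than in G, here's a loose Python analogy of that distinction (struct and numpy standing in for the LabVIEW primitives). Flatten To String is explicit about the format (big-endian by default); a raw byte reinterpretation is compact but carries no type or endianness information, which is exactly what makes it fragile:

import struct
import numpy as np

data = np.arange(4, dtype=np.float64)

# "Flatten To String" analogy: explicit serialization with a spelled-out
# big-endian format, so the receiver knows what it is getting.
flat = struct.pack(f">{data.size}d", *data)

# "Typecast" analogy: dump the raw in-memory bytes. The receiver must
# already know it is getting 4 native-endian float64 values.
raw = data.tobytes()
back = np.frombuffer(raw, dtype=np.float64)
assert np.array_equal(back, data)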

 

 

-Kevin P

Message 11 of 31

@Kevin_Price wrote:

  It strikes me that it might be more efficient to use a Typecast instead.  Couldn't guarantee it though, and it also seems like a more fragile approach to be used only if really necessary.

-Kevin P


Typecast ALWAYS makes a data copy. You need to be careful when using it. I looked into this a long time ago, not sure if it is still valid or not, but I remember the following.

 

[Attached image: Snap36.png]

mcduff

Message 12 of 31

Thanks for the correction.  Wow!  I've been carrying that misinformation around with me for a really long time.  I thought it was like casting in C, which says, "don't touch the bits, just interpret them as the specified datatype".   
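
In text-language terms, the difference looks roughly like this numpy analogy (a sketch, not LabVIEW's actual internals): a C-style cast is like a view over the same memory, while LabVIEW's Typecast reinterprets the bits into a fresh buffer, closer to a round-trip through bytes:

import numpy as np

a = np.array([1.0, 2.0], dtype=np.float64)

# C-style cast: same bits, same memory, no copy.
b = a.view(np.uint64)
assert np.shares_memory(a, b)

# Typecast-like behavior: same bits, but landed in a *new* buffer.
c = np.frombuffer(a.tobytes(), dtype=np.uint64)
assert not np.shares_memory(a, c)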

 

 

-Kevin P

Message 13 of 31

I have tried several ways of preparing the data for sending to the host Windows PC so that it can be handled easily, with no reshaping or type casting etc. on the Windows machine. The data is read as a 2D DBL array, which is immediately converted to SGL and reshaped to a 1D array, since an RT FIFO can't handle 2D arrays. Why an RT FIFO and not a queue? Well, a queue can't read and write simultaneously, and that would cause some jitter in the producer loop, which I for some reason don't want to have ;-). I'm sending the data by network stream, so I don't need to flatten and unflatten the data; one less thing to do :-).
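
For reference, here's roughly what that conversion chain costs, in a text-language analogy (numpy standing in for the LabVIEW arrays; whether LabVIEW's compiler does the reshape in place is an assumption on my part, but the SGL conversion necessarily allocates):

import numpy as np

data_2d = np.random.rand(16, 1000)     # stand-in for one 2D DBL read

sgl = data_2d.astype(np.float32)       # DBL -> SGL forces a full buffer copy
flat = sgl.reshape(-1)                 # reshaping contiguous data is free (a view)
assert np.shares_memory(sgl, flat)     # the reshape adds no second copy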

Message 14 of 31

@ahmalk71 wrote:

I have tried several ways of preparing the data for sending to the host Windows PC so that it can be handled easily, with no reshaping or type casting etc. on the Windows machine. The data is read as a 2D DBL array, which is immediately converted to SGL and reshaped to a 1D array, since an RT FIFO can't handle 2D arrays. Why an RT FIFO and not a queue? Well, a queue can't read and write simultaneously, and that would cause some jitter in the producer loop, which I for some reason don't want to have ;-). I'm sending the data by network stream, so I don't need to flatten and unflatten the data; one less thing to do :-).


Is the system just RT, or RT and FPGA? From what you said just now, I'm assuming RT only, as I don't know how you would get doubles out of an FPGA.

 

I would stick with raw integer data, and either scale it before saving, or save the scaling information in the file. Going from double to single will incur a buffer copy; maybe it's not an issue.
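
A minimal sketch of the "save the scaling information in the file" idea, in Python as a stand-in for the file-writing VIs (the file names and field names here are hypothetical, just for illustration):

import json
import numpy as np

# Raw 24-bit counts carried in I32, plus a small sidecar with the scaling.
counts = np.random.randint(-2**23, 2**23, size=16000, dtype=np.int32)
scaling = {"gain_volts_per_count": 10.0 / 2**23, "offset_volts": 0.0}

counts.tofile("block.i32")
with open("block.json", "w") as f:
    json.dump(scaling, f)

# Whoever reads the file applies the scaling only when volts are needed.
raw = np.fromfile("block.i32", dtype=np.int32)
volts = raw * scaling["gain_volts_per_count"] + scaling["offset_volts"]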

 

Can you post part of your VI?

 

mcduff

Message 15 of 31

@Kevin_Price wrote:

You get a kudo for being my 500th kudo. 🙂

 

mcduff

 

Message 16 of 31

Like mcduff, I question the conversion from DBL to SGL due to the necessary data copying.

 

I *do* understand that it cuts the data bandwidth to 1/2, but why stop there?  If it's important enough to incur the cost of converting to SGL, why *wouldn't* you use one of the unscaled integer versions of DAQmx Read (also suggested by mcduff)?

Not only would that cut your data bandwidth down to 1/4, you could *also* get rid of the data copying.  I'd plan to maintain this integer representation across TCP/IP and leave any rescaling to the receiving side of the connection.  You'll just have to work out a scheme to deliver the scaling parameters to the receiving side.
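
One possible wire format for such a scheme, sketched in Python (the layout and function names are purely illustrative, not anything DAQmx or LabVIEW prescribes):

import struct
import numpy as np

def pack_block(counts: np.ndarray, gain: float, offset: float) -> bytes:
    # Hypothetical layout: 8-byte gain, 8-byte offset, 4-byte sample
    # count, then the raw I32 samples, all big-endian so both ends agree.
    header = struct.pack(">ddI", gain, offset, counts.size)
    return header + counts.astype(">i4").tobytes()

def unpack_block(blob: bytes) -> np.ndarray:
    gain, offset, n = struct.unpack_from(">ddI", blob)
    counts = np.frombuffer(blob, dtype=">i4", offset=20, count=n)
    return counts * gain + offset      # rescale only on the receiving end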

 

 

-Kevin P

Message 17 of 31

The file format that our customer uses supports only SGL. Converting the data to SGL at the beginning of the acquisition saves us unnecessary scaling downstream, and since it hasn't been an issue I haven't given it much thought. I have included a snippet showing how I want to change the code to two producer/consumer loop pairs. This is just a quick sketch to show how I was thinking; I haven't put in the sync details and other stuff, so please don't worry about that.

Regarding the bandwidth reduction from using raw data instead: we use NI-4497 cards, which have 24-bit ADCs. Correct me if I'm wrong, but wouldn't using I16 as the output reduce the dynamic range of the signal?

 

Best regards

Ahmed

[Attached image: RT ACQ.png]

Message 18 of 31

It's just RT, no FPGA, and the DAQ cards used are NI-4497s, which have 24-bit ADCs. The reason I didn't use the I16 raw output is to avoid a reduction in the signal's dynamic range, if I have understood it correctly.
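
For reference, the arithmetic behind that concern (ideal quantization, ignoring the device's actual noise floor): truncating 24-bit samples to I16 gives up roughly 48 dB of theoretical dynamic range, which is why an I32 unscaled read is usually the better fit for 24-bit devices. A quick check in Python:

from math import log10

# Ideal quantization dynamic range is about 20*log10(2**bits).
for bits in (16, 24):
    print(f"{bits} bits -> {20 * log10(2**bits):.1f} dB")
# 16 bits -> 96.3 dB
# 24 bits -> 144.5 dB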

Message 19 of 31

This sounds like a very interesting application.

 

I just came across this thread, and my first advice is: if it isn't giving you any problems, don't change it speculatively. Splitting the loops may help distribute the load, but it just moves the complexity to the point where you need to reconstruct everything.

 

However, what I would suggest trying is NOT using a timed loop (I've assumed from some comments that you are using one; sorry if I'm wrong). I know this probably sounds backwards, but everything inside a timed loop is forced onto a single thread, so nothing in that loop happens in parallel.

 

Instead, I would convert it to a while loop and set that VI to a higher priority. That way it can multi-thread the conversions while still allowing the DAQ to preempt everything else.

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Message 20 of 31