LabVIEW


convert the speed of the while loop on MyRio 1900 from the millisecond range to the microsecond range

Solved!

Hi guys,
I have used a myRIO-1900 to design my project (a real-time operating system). The issue is that I used a myRIO VI project, which can only run in the millisecond range (see https://learn-cf.ni.com/teach/riodevguide/code/rt_measure-loop-time.html). I need to run my project in the microsecond range, so I am planning to convert it to an FPGA myRIO VI project.

I have two questions:
1) Is there any way to run my existing myRIO VI project in the microsecond range?
2) Is there any way to run it on an FPGA myRIO target (as an FPGA myRIO VI project) without rebuilding everything from zero?

Thanks in advance

Message 1 of 14

1) No, forget it. The CPU even on a high-powered desktop machine could never guarantee deterministic microsecond reaction times, and the ARM CPU in the myRIO is considerably less powerful.

 

A rule of thumb:

 

- Desktop OS: non-guaranteed reaction times of typically tens of ms (but an occasional hiccup of hundreds of ms, or even seconds, is definitely possible).

- Real-time OS: guaranteed reaction times in the ms range.

- FPGA hardware: reaction times below 1 µs are very possible, and if you get the circuit to route and run, your reaction times will always stay within the boundaries set by your design. (With a synchronous design this is pretty easily predictable. With asynchronous designs it can be a challenge to prove anything; they are usually faster but can have nasty transitional effects while the asynchronous circuits are changing logical states.) Most designs you can build in LabVIEW FPGA are almost always fully synchronous.

- Dedicated hardware circuits: ns range
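The desktop-OS regime in the list above is easy to demonstrate yourself. A minimal sketch (plain Python, purely illustrative, not LabVIEW) that requests a 1 ms sleep in a loop and records how far the OS overshoots it:

```python
import time

def measure_sleep_jitter(period_s=0.001, iterations=200):
    """Request a fixed sleep period and record how much the OS overshoots it."""
    overshoots = []
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(period_s)
        elapsed = time.perf_counter() - start
        overshoots.append(elapsed - period_s)
    return min(overshoots), max(overshoots)

lo, hi = measure_sleep_jitter()
# On a desktop OS the worst-case overshoot is typically hundreds of
# microseconds to several milliseconds, and there is no guaranteed bound.
```

Running this a few times shows the spread between best- and worst-case wakeups, which is exactly the non-determinism that disappears only once the logic lives in FPGA fabric.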

 

2) You can start by dragging your VI into the FPGA target. Most likely it will produce many errors, as the FPGA target is very limited in which LabVIEW nodes it can execute. And of course things like DAQmx simply don't exist on the FPGA target; you will need to rewrite that code to use direct FPGA I/O nodes (or, if you used I2C or SPI, the corresponding FPGA counterparts). What worked in the software environment can't just be carried over to FPGA as-is.

Rolf Kalbermatter
Averna BV
Message 2 of 14

Great explanation, thank you very much

 

I have another related question. Since there is no MathScript Node in LabVIEW FPGA, how do we write code in LabVIEW FPGA?

Message 3 of 14

@ashrafayasrah wrote:

Since there is no MathScript Node in LabVIEW FPGA, how do we write code in LabVIEW FPGA?


You use plain graphical code, using the tools available in the FPGA-specific palettes. What are you trying to do?


LabVIEW Champion. It all comes together in GCentral.
What does "Engineering Redefined" mean??
Message 4 of 14

@ashrafayasrah wrote:

I have another related question. Since there is no MathScript Node in LabVIEW FPGA, how do we write code in LabVIEW FPGA?


LabVIEW FPGA is a pretty advanced topic.  You should be quite familiar with "standard LabVIEW" and know how to do things (like acquire data) without ever using an Express VI.  There are no shortcuts in the FPGA world -- indeed, in some ways it's like "going back to Assembly Language, without Macros", where you "count the wires" and "count the nanoseconds".

 

Did you know that even the simplest of FPGA programs can take a minimum of 10 minutes to compile?  That means that when you want to change something and see if it works, you change the code, wait 10 minutes for it to compile, then find the next problem to fix ...  And any routine of moderate complexity can take far longer to compile.

 

Bob Schor

Message 5 of 14

Thanks for answering. I am doing a battery management system using a multilevel inverter.


It will be too difficult to convert my algorithm from text code to graphical code. Is there any way to keep my code somewhere and then send/receive data to/from it?


Actually, I have a real-time operating system.
So I need to send the data from LabVIEW FPGA (acquired from the real system, a prototype) to be processed by my algorithm (in normal LabVIEW, I put the algorithm inside a MathScript Node for this stage), and then send the algorithm's outputs back to LabVIEW FPGA to drive the switch signals ON or OFF through the myRIO-1900 (I have many MOSFET switches) on my real prototype.

Message 6 of 14

Thanks for the guidance

Message 7 of 14

Well, transferring data from the FPGA to the RT part and back takes time. If you have to do that for every single sample separately, you are very likely not going to get even close to a 10 kS/s data rate.

 

If you can do it in batch mode (let's say reading 10 samples or so, sending them to the RT side, doing some analysis, and then sending the reaction back to the FPGA to act on some I/O accordingly), you have a chance to succeed. You would need a DMA FIFO for each of the two channels and be pretty diligent about handling the data appropriately.
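The batch pattern described above can be sketched in a few lines. This is plain Python, not LabVIEW: the two deques are hypothetical stand-ins for the FPGA-to-host and host-to-FPGA DMA FIFOs, and `process_batch` is a placeholder for the real RT-side algorithm:

```python
from collections import deque

BATCH = 10  # samples per batch, as suggested above

# Hypothetical stand-ins for the two DMA FIFO channels.
fpga_to_rt = deque(range(100))   # pretend samples arriving from the FPGA
rt_to_fpga = deque()             # reactions to send back down

def process_batch(samples):
    """Placeholder analysis: the real algorithm would run here on the RT side."""
    return [s * 2 for s in samples]

while len(fpga_to_rt) >= BATCH:
    batch = [fpga_to_rt.popleft() for _ in range(BATCH)]  # FIFO read (one batch)
    results = process_batch(batch)                        # RT-side analysis
    rt_to_fpga.extend(results)                            # FIFO write back
```

The point is that each FIFO transaction moves a whole batch, so the fixed per-transfer overhead is paid once per 10 samples instead of once per sample.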

 

But it may actually be a lot more work to adjust your algorithm to account for batch-mode processing than to bite the bullet and convert your math into LabVIEW code. It all depends on what kind of math you are doing in there. If it is all plus, minus, multiplication, and division, it shouldn't be too difficult, but expect to do some serious analysis anyhow. You either have to convert all the math to integer data types or use fixed-point arithmetic to be able to run it on the FPGA. That will certainly require some serious consideration if you are currently working in floating-point arithmetic.
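To illustrate the float-to-fixed-point conversion mentioned above: the usual trick is to store each value as an integer scaled by a power of two (a Q-format representation). A minimal sketch in Python (illustrative only; in LabVIEW FPGA you would use the built-in fixed-point data type instead):

```python
# Represent values as integers scaled by 2**FRAC (Q16.16 fixed point).
FRAC = 16
SCALE = 1 << FRAC

def to_fxp(x):
    """Convert a float to its scaled-integer representation."""
    return round(x * SCALE)

def from_fxp(i):
    """Convert a scaled integer back to a float (for inspection only)."""
    return i / SCALE

def fxp_mul(a, b):
    # Multiplying two Q16.16 numbers doubles the fractional bits; shift back.
    return (a * b) >> FRAC

a = to_fxp(3.25)
b = to_fxp(0.5)
sum_val  = from_fxp(a + b)          # addition works directly: 3.75
prod_val = from_fxp(fxp_mul(a, b))  # multiplication needs the shift: 1.625
```

Addition and subtraction stay plain integer operations, which is why all-arithmetic algorithms port comparatively easily; the analysis effort goes into choosing enough integer and fractional bits that no intermediate result overflows or loses needed precision.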

 

If you do advanced analysis, or things like trigonometric functions or non-linear algebra, you are however not likely to succeed. Doing these things in FPGA hardware requires some serious knowledge of mathematical theory, or may be basically impossible.

Rolf Kalbermatter
Averna BV
Message 8 of 14

@ashrafayasrah wrote:


It will be too difficult to convert my algorithm from text code to graphical code. Is there any way to keep my code somewhere and then send/receive data to/from it?


For me, MathScript code is the more difficult option. Can you describe what the algorithm actually does? In 25 years of advanced LabVIEW programming, I have never used MathScript and never had any need for it. Even if you run it on RT, you should replace it with graphical code for performance.

 

Are you using old MathScript code written by somebody else? MathScript is typically more of a band-aid solution to quickly reuse existing Matlab code where performance does not matter.

 

Battery management is typically a relatively slow process. Can you explain why you think a battery management system requires such high speeds?

 


LabVIEW Champion. It all comes together in GCentral.
What does "Engineering Redefined" mean??
Message 9 of 14
Solution
Accepted by topic author ashrafayasrah

It's a pretty advanced, and not yet well-trodden, FPGA workflow, but if you have the requisite Mathworks® software, you can try using their tools to convert your design into HDL, and then bring that into LabVIEW FPGA as a CLIP or IPIN item.

 

https://www.ni.com/en-us/support/documentation/supplemental/20/matlab---simulink---and-labview-fpga-...

 

Edit: As a warning, some of the HDL I've seen come out of this workflow is pretty fabric-intensive, and the FPGA on the myRIO is not very big...

Cheers!

TJ G
Message 10 of 14