01-17-2017 06:25 AM
Hello,
I have been thinking about this problem for a long time and have made several searches on the web, but I don't seem to find what I am looking for.
I have developed a model in the LabVIEW Control Design and Simulation Module and I would like to implement the model in LabVIEW FPGA.
Does anyone know if there is a proper method of doing this?
My approach is shown in the attached snippet, but I am not sure whether what I am doing is acceptable; my model is also attached. I am programming on a 7853R card.
Any suggestion will be highly appreciated.
Kind Regards,
Opuk
01-18-2017 05:02 AM
You didn't attach the FPGA VI, only a screenshot, so the following is based on a lot of guessing:
- Your input data and result values are integers, yet you are comparing against threshold values on the order of 10E-6, so something is likely wrong. Scale your data to match the range of the (DBL) data and results in your SIM VI, and use the fixed-point (FXP) format to keep track of your values' range and resolution.
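Since LabVIEW code is graphical, here is a rough Python sketch of what FXP quantization does to small values. The word lengths and the threshold below are illustrative, not taken from your VI; the point is that the integer/fractional bit split decides whether a value like 10E-6 is representable at all.

```python
def to_fxp(value, word_length=32, integer_word_length=1, signed=True):
    """Quantize a float to a fixed-point value, following the LabVIEW FXP
    convention of (word length, integer word length)."""
    frac_bits = word_length - integer_word_length
    lsb = 2.0 ** -frac_bits  # resolution: the weight of one least-significant bit
    if signed:
        lo = -(2.0 ** (integer_word_length - 1))
        hi = -lo - lsb
    else:
        lo = 0.0
        hi = 2.0 ** integer_word_length - lsb
    # round to the nearest representable value, then saturate to the range
    q = round(value / lsb) * lsb
    return min(max(q, lo), hi)

threshold = 10e-6
print(to_fxp(threshold))                   # a <+-1, 32-bit> FXP resolves 10e-6 fine
print(to_fxp(threshold, word_length=16))   # a 16-bit word quantizes it to 0.0
```

So before picking an FXP configuration, check that the fractional bit count actually covers your smallest thresholds.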
- There are too many (red) coercion dots on your diagram, so it will be difficult for you to keep track of what's going on. Coercion dots on both inputs of a Select (!?) and on the output terminals of your adders are especially going to give you problems. It's best to keep full control of your coercions with appropriate casts and format configurations. Try to get rid of all the red dots.
- The sequence structure in your VI is not needed; LabVIEW's dataflow will take care of the execution order.
- You are multiplying positive values by negative constants and casting the result to U32. This will always return 0.
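To make that concrete, here is a small Python sketch of LabVIEW's saturating To U32 conversion (the constants are made up for illustration). Unlike a C-style cast, which would wrap around, LabVIEW clips out-of-range values to the nearest representable value, so any negative product clips to 0.

```python
U32_MAX = 2**32 - 1

def to_u32(x):
    """Saturating conversion to U32, mimicking LabVIEW's coercion behavior:
    round to the nearest integer, then clip into [0, U32_MAX]."""
    return min(max(int(round(x)), 0), U32_MAX)

print(to_u32(5 * -0.32))   # negative product -> clipped to 0
print(to_u32(5 * 0.32))    # positive product -> rounded to 2
```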
- It looks to me like you are doing some sort of in-range comparison, but with a chain of sequential comparisons. I think your code could be greatly simplified with the In Range and Coerce primitive.
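For reference, In Range and Coerce does both jobs in one node; a hedged Python sketch (bounds here are invented for illustration, and both bounds are treated as inclusive):

```python
def in_range_and_coerce(x, lower, upper):
    """Return (coerced value, in-range flag), roughly like LabVIEW's
    In Range and Coerce with both boundaries included."""
    in_range = lower <= x <= upper
    coerced = min(max(x, lower), upper)  # clip x into [lower, upper]
    return coerced, in_range

print(in_range_and_coerce(12e-6, 0.0, 10e-6))  # above the upper bound
print(in_range_and_coerce(5e-6, 0.0, 10e-6))   # inside the range
```

One node instead of several comparisons and Selects also means fewer places for coercion dots to appear.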
In the end, you can always test your VI on your host machine to confirm it works as expected (i.e., returns the same results as your SIM VI) before spending time compiling for FPGA.