05-27-2015 11:38 AM
I need to write a student project on analog sorting of signals using neural networks in LabVIEW, with an FPGA application. How can I do this, and in particular how can analog signal sorting be combined with what is presumably binary circuitry? I have an approximate scheme with the following elements:
an adder (just 3-4 inputs), an integrator, multipliers, and step functions (probably acting as switches). There are also DC signal sources on the inputs, and I have to produce these signals on the output in descending order. How can such a design be built in LabVIEW, which parts are purely binary, and how should the analog-to-binary conversion be handled, independently of the FPGA? And what is the role of the FPGA here; in particular, how can I apply an FPGA module? I would be glad to know, as I have some basic knowledge of Verilog programming. My initial question (which never got an answer) is: what is the minimum number of signals for this design to be appropriate for study or research purposes, given that the single adder sums all the inputs and each input passes through a step-function element back to that same adder (summator)?
Would you make any suggestions, or point me to references on similar projects, especially regarding the complexity of such a scheme?
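For reference, the loop I have in mind could be sketched in ordinary Python (all gains, time steps, and iteration counts below are my own guesses for illustration, not values from the scheme): each unit integrates its DC input minus a shared inhibition signal, the inhibition comes from one adder that sums the step-function outputs, and after the loop settles the step outputs mark the k largest inputs.

```python
import numpy as np

def kwta(inputs, k=1, dt=0.01, steps=5000, gain=50.0):
    """One pass of the adder/integrator/step-function loop: mark the k largest inputs.

    inputs : DC source voltages to compare
    k      : how many winners the shared adder should allow through
    """
    u = np.asarray(inputs, dtype=float)
    x = np.zeros_like(u)                   # integrator states
    for _ in range(steps):
        y = (x > 0.0).astype(float)        # step-function elements
        inhibition = gain * (y.sum() - k)  # shared adder fed back to every unit
        x += dt * (u - inhibition)         # integrators (Euler step)
    # After settling, the winners' states sit far above zero while the rest
    # chatter near or below zero; read out with a small margin so the
    # boundary chatter cannot flip the result.
    return x > 1.0

print(kwta([3.0, 7.0, 1.5, 5.0], k=2))  # marks the inputs 7.0 and 5.0
```

The integrators keep the relative ordering of the inputs, so the differences between unit states grow over time while the shared inhibition tracks a common threshold; that is why a single adder plus step functions is enough to select the winners.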
05-27-2015 08:53 PM
Ant,
There was a lot in your post, but I will try to answer some of the bigger questions as I see them. First, it seems that you are new to LabVIEW FPGA. If you are new to LabVIEW in general, I would suggest going through the 3- or 6-hour tutorial, but if you are familiar with LabVIEW and just not FPGA, start here.
http://www.ni.com/tutorial/14532/en/
I'm not sure exactly what you mean by analog-binary transformation, but the Number To Boolean Array function might be what you are looking for.
http://zone.ni.com/reference/en-XX/help/371361L-01/glang/number_to_boolean_array/
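If it helps, the behavior of that function (the output array is least-significant bit first) can be sketched in ordinary Python, with the ADC scaling shown as a made-up example, not something from your scheme:

```python
def number_to_boolean_array(value, bits=8):
    """Rough textual equivalent of LabVIEW's Number To Boolean Array:
    element 0 of the result is the least-significant bit."""
    return [bool((value >> i) & 1) for i in range(bits)]

# Hypothetical example: a 7 V reading quantized by an 8-bit ADC spanning 0-10 V.
code = int(7.0 / 10.0 * 255)        # integer code, roughly 178
print(number_to_boolean_array(code))  # bit pattern, LSB first
```

On real hardware this quantization happens inside the ADC of the input module; in a pure simulation you would produce the integer code yourself, as above.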
The last part is a bit complicated, because you have to remember that you are the expert on your own application. We don't know what you are doing for your research, so it is hard to make suggestions about what you should be doing. One general thing I would suggest is to work out your algorithms on paper first, then transfer that logic to LabVIEW running on your computer, and then move over to the FPGA. As you get more comfortable with LabVIEW, it will be easier to go straight from paper to FPGA.
05-28-2015 09:49 AM
Yes, I am aware of the LabVIEW basics and somewhat more. But the link for the LabVIEW FPGA tutorial is not very convenient, as it relies on embedded videos. I would prefer plain text, or at least videos that I could download from YouTube or elsewhere.
The main idea here is that it is a kind of k-WTA (k-winners-take-all) algorithm, although probably transformed quite a bit. So I do not know whether I should use DAQ here, since this should be a simulation (without any hardware). In theory, there should be several different DC voltage signals that I have to sort according to their voltage. But what should the output look like? It should also show timing characteristics, that is, the efficiency of the design.

Here I do not know how to transform, for example, 7 V (or 70 V) to binary. Or should it use some bitwise technique: if the bit counts are the same, compare bit by bit (0 or 1); if not, compare the counts? Also, just yesterday I began reading about DAQ, and it is said to relate to analog-to-binary conversion; but without any device (and therefore without DAQ), do I need to simulate the different voltages in some way? In addition, there is a need to find the first winner and then use feedback.

And why FPGA? Maybe it should be an additional way to verify this algorithm? Finally, this is all in a neural-network context: could you refer me to resources applicable to this kind of task? I found a toolkit, but it is for LabVIEW 14; can you convert it to the 2012 version?
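To make the feedback idea concrete, here is how I currently understand the repeated winner-take-all sorting, sketched in Python (the `argmax` call stands in for one settled analog WTA pass; the suppression value is my own assumption, not a given design detail):

```python
import numpy as np

def wta_sort_descending(voltages, suppress=-1e6):
    """Sort signal indices by repeatedly finding the single winner,
    then feeding back a large negative value so that winner is knocked
    out of the next competition."""
    v = np.array(voltages, dtype=float)
    order = []
    for _ in range(len(v)):
        winner = int(np.argmax(v))  # stand-in for one settled WTA pass
        order.append(winner)
        v[winner] = suppress        # feedback: remove the winner
    return order

print(wta_sort_descending([3.0, 7.0, 1.5, 5.0]))  # indices in descending order
```

One timing observation that follows from this structure: the number of WTA passes equals the number of signals, so the total settling time should grow roughly linearly with the signal count, which may be the efficiency characteristic to measure.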
05-29-2015 12:02 PM
Ant,
There is a white paper on implementing neural networks in LabVIEW here. How are you planning to receive the current inputs to use in LabVIEW? You will need to use either our cDAQ or cRIO platforms with analog input modules that have their own ADCs. I'm not sure what the best form factor would be for your application.
05-29-2015 01:15 PM
05-29-2015 02:27 PM
05-30-2015 04:13 AM
06-01-2015 09:38 AM
To answer your first question, you can change the value of the DC measurement in Simulate Signal by changing the Offset input.
Here is a link to the toolkit discussed in the white paper for LabVIEW 2012.
The FPGA IP Builder just adds additional FPGA-only functions that have been optimized to compile down to the most efficient HDL code.
I don't know of any kwta examples in LabVIEW that I can share with you.
Regarding your questions on coding for the FPGA, I'm not sure exactly what you're asking. I will tell you that coding for FPGA in LabVIEW is the same as coding for any other LabVIEW application. Everything follows the same rules (dataflow, etc.). Whatever you wire to the inputs of a function will be the inputs.
06-01-2015 05:12 PM
06-01-2015 08:39 PM
It sounds like a lot of this is over your head. You're going to struggle if you don't take some time to work with the basics.
You don't have any hardware, so why would you expect to find any attached to your computer? If you want to add a target from the "select a device" option, you'll likely want to right-click on the project title, rather than "My Computer", and find the new target there. You'll have a wider range of targets to work with, as most are remote devices rather than being internal to your computer.
You haven't really been clear about your goal here. You can't use any hardware, but you want to focus on FPGA programming, which will eventually require hardware. Even the best simulation won't match the precise timing you'll experience on the real device. At best, you'll be able to take a first step without hardware. Will you eventually get to use hardware? Is your hardware already chosen for you, or can you use anything (can you use a cRIO, or are you limited to one of the sbRIO or R-Series options)?