LabVIEW


How to implement analog sorting?

I need to write a student project on analog sorting of signals using neural networks in LabVIEW, with an FPGA application. How can I do it? In particular, how do I sort analog signals with what is probably binary circuitry? I have an approximate scheme with elements such as:

Adders (with just 3-4 inputs), an integrator (integrating function), multiplier functions, and step functions (probably acting as switches). And of course there are constant DC signal sources on the input, and I have to produce these signals at the output in descending order. How do I build such a design in LabVIEW, which parts are purely binary, and how do I handle the analog-to-binary conversion, irrespective of the FPGA? And what is the role of the FPGA here; in particular, how can I apply an FPGA module? I would be glad to know, as I have some basic knowledge of Verilog programming. My initial question (which I never got an answer to) is: how many signals, as a minimum, would make this design appropriate for study or research purposes, given that one adder makes the sum of all the inputs, which is then passed through a step-function element back to that same adder (summator)?
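For what it's worth, the loop described above (step functions feeding one adder, whose output is integrated and fed back as the threshold) can be sketched in ordinary code before touching LabVIEW. This is only my reading of the scheme, not an NI example; the input values, the gain `eta`, and the iteration count are assumed for illustration:

```python
def kwta_threshold(inputs, k, eta=0.05, steps=5000):
    """Simulate the analog k-WTA loop with Euler integration.

    The adder sums the step-function outputs step(a_i - x) minus k,
    and the integrator accumulates that error into the threshold x.
    At equilibrium exactly k inputs lie above x.
    """
    x = min(inputs)  # start the threshold below every input
    for _ in range(steps):
        fired = sum(1 for a in inputs if a - x > 0)  # step functions
        x += eta * (fired - k)                       # integrator
    return x

signals = [5, 7, 3, 11, 6, 9]
x = kwta_threshold(signals, k=2)
winners = sorted(a for a in signals if a > x)  # the k largest signals
```

Running the loop repeatedly with K = 1, 2, 3, ... would then let you read the signals off in descending order, which is the sorting behaviour the scheme is after.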

Would you have any suggestions or references to similar projects, especially regarding the complexity of such a scheme?

0 Kudos
Message 1 of 33
(3,878 Views)

Ant,

 

There was a lot in your post, but I will try to answer some of the bigger questions as I see them. First, it seems that you are new to LabVIEW FPGA. If you are new to LabVIEW in general, I would suggest going through the 3- or 6-hour tutorial, but if you are familiar with LabVIEW and just not FPGA, start here.

 

http://www.ni.com/tutorial/14532/en/

 

I'm not sure exactly what you mean by analog-binary transformation, but the Number To Boolean Array function might be what you are looking for.

 

http://zone.ni.com/reference/en-XX/help/371361L-01/glang/number_to_boolean_array/
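In case it helps to see the idea outside LabVIEW, Number To Boolean Array just exposes the bits of an integer, least-significant bit first. A rough Python equivalent (the 8-bit width is an assumption; in LabVIEW the width follows the integer type, e.g. U8 gives 8 elements):

```python
def number_to_boolean_array(n, width=8):
    """Mimic LabVIEW's Number To Boolean Array for an unsigned integer:
    element 0 of the result is the least-significant bit."""
    return [bool((n >> i) & 1) for i in range(width)]

bits = number_to_boolean_array(7)   # 7 = 0b00000111
# → [True, True, True, False, False, False, False, False]
```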

 

The last part is a bit complicated, because you have to remember that you are the expert on your own application. We are not sure what you are doing for research, so it is really hard to make suggestions on what you should be doing. One general thing I would suggest is to work out your algorithms on paper first, then transition that logic to LabVIEW running on your computer, and then move over to FPGA. As you get more comfortable with LabVIEW, it will be easier to go straight from paper to FPGA.

Matt J | National Instruments | CLA
0 Kudos
Message 2 of 33
(3,814 Views)

Yes, I am aware of the LabVIEW basics and a bit more. But the linked LabVIEW FPGA tutorial is not very convenient for me, as it uses embedded videos. I would prefer plain text, or at least videos that I could download from YouTube or similar.

The main idea here is that it is a kind of k-WTA (k-winners-take-all) algorithm, although it has probably been transformed quite a bit. So I do not know whether I should use a DAQ here, since this should be a simulation (without any chip). Theoretically, there should be several different signals from DC sources that I have to sort according to their voltage. But what should the output look like? It should also show timing characteristics, that is, the algorithm's efficiency.

Also, I do not know how to transform, for example, 7 V (or 70 V) to binary. Or should I use some bitwise technique: if the bit counts are the same, compare the bits one by one; if not, compare the counts themselves? Apart from that, I only began reading about DAQ yesterday, and it is said to relate to analog-binary transformation. But without any device (and therefore without a DAQ), do I need to simulate the different voltages in some way? There is also the need to define the first winner and then use feedback.

And why the FPGA? Maybe it should be an additional way to check this algorithm? Additionally, this is in a neural network context; could you point me to resources applicable to this kind of task? I found a toolkit here, but it is for LabVIEW 14. Can anyone convert it to the 2012 version?
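On the bitwise comparison idea: once a voltage has been quantized to an unsigned integer code (which is what an ADC does), the comparison described above is how a digital magnitude comparator works; comparing bit counts first is equivalent to padding both codes to the same width and scanning from the most significant bit. A small sketch, where the 10 V full scale and 10-bit width are assumed values:

```python
def quantize(voltage, full_scale=10.0, bits=10):
    """Quantize a voltage to an unsigned integer code, as an ADC would."""
    code = round(voltage / full_scale * (2**bits - 1))
    return max(0, min(2**bits - 1, code))

def bitwise_greater(a, b, bits=10):
    """Compare two codes the digital-comparator way: scan from the most
    significant bit; the first bit where they differ decides the result."""
    for i in reversed(range(bits)):
        abit, bbit = (a >> i) & 1, (b >> i) & 1
        if abit != bbit:
            return abit > bbit
    return False  # the codes are equal

a, b = quantize(7.0), quantize(3.0)
higher_first = bitwise_greater(a, b)   # the higher voltage wins
```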

0 Kudos
Message 3 of 33
(3,781 Views)

Ant,

 

There is a white paper on implementing neural networks in LabVIEW here. How are you planning on receiving the current inputs to use in LabVIEW? You will need to use either our cDAQ or cRIO platforms with analog input modules that have their own ADCs. I'm not sure what the best form factor would be for your application.

0 Kudos
Message 4 of 33
(3,737 Views)
No, I probably just need to simulate the sorting, as I will not be able to use any device. As I was advised, the inputs should be simple numbers (5-6 of them). I did find the Simulate Signal function in LabVIEW myself, but the DC signal it creates defaults to 0 V amplitude, so I do not know how to change it to some other level. The sorting should be implemented with 5-8 iterations of training a feed-forward neural network.
0 Kudos
Message 5 of 33
(3,723 Views)
Can anyone convert such a neural toolkit to the 2012 version for me, and send it to me, at least in a private message?
0 Kudos
Message 6 of 33
(3,713 Views)
Really, could anybody suggest how to use the FPGA here? I was instructed to download the FPGA IP Builder. But is that all? How do I use it? Setting the FPGA aside: I do know that to sum I need the Add (Subtract) function, Multiply for the multiplier, a step function for the nodes of this network, and an integrating function. But if I just use simple numbers, how would this be connected to hardware, and how can we do timing analysis? Could anybody point me to a k-WTA implementation in LabVIEW? And if I use simulated signals, does the Add function, for example, take the amplitude as its input? After all, an FPGA works in binary logic, so how do I apply it here?
0 Kudos
Message 7 of 33
(3,687 Views)

To answer your first question, you can change the value of the DC measurement in Simulate Signal by changing the Offset input.

 

Here is a link to the toolkit discussed in the white paper for LabVIEW 2012.

 

The FPGA IP Builder just adds additional FPGA-only functions that have been optimized to compile down to the most efficient HDL code. 

 

I don't know of any k-WTA examples in LabVIEW that I can share with you.


Regarding your questions on coding for the FPGA, I'm not sure exactly what you're asking. I will tell you that coding for FPGA in LabVIEW is the same as coding for any other LabVIEW application. Everything follows the same rules (dataflow, etc.). Whatever you wire to the inputs of a function will be the inputs.

0 Kudos
Message 8 of 33
(3,615 Views)
http://zone.ni.com/reference/en-XX/help/371599H-01/lvfpgahelp/adding_an_fpga_target/ -- following this, I could not add a target; I do not see any target under My Computer in a new project. Anyway, what module should I download to use FPGA? I was instructed to download the FPGA IP Builder, but how should I use it beyond installing it, and what functionality does it have?

I also do not know the capabilities of FPGA in LabVIEW. Does it use, for example, the Add and Multiply functions like the rest of LabVIEW, or should I turn the non-binary numbers into binary ones and then use the logic functions OR, AND, XOR to build the multiplication, addition, integration, and comparison?

As for the neural network, it should use the back-propagation method, so an example of that kind would be interesting. For example, I have N numbers (signals) in arbitrary order, say 5, 7, 3, 11, 6, 9. The main difficulty is to find an x (between 3 and 11) that makes the function E minimal, that is, equal to 0. K is the number of biggest numbers, or winners, among them, with K < N:

E = K - [step(5 - x) + step(7 - x) + step(3 - x) + ...]

where step(5 - x) is 1 if 5 - x > 0, and 0 otherwise, and so on. But I do not know whether I should use a rectangular (hard-threshold) function or a sigmoid function here, and what type x should be: an integer such as 3, 4, 5, or any real number such as 3.11. After finding x I would be able to find, for example, the 1 winner (K = 1), then 2 winners (K = 2). Then I could probably subtract the second set from the first, the third from the second (although I am not really sure whether it is possible to define, for example, a2 simply by having a1, which is also unordered and differs by one component).

So I need some very compact back-propagation algorithm, not necessarily in LabVIEW, where I would make N input nodes, K output nodes, and an unknown number of hidden (middle-layer) nodes, though probably also N of them. So I should use N inputs multiplied by N weights in (0; 1). The desired outputs should be 11, 9, 7 if K = 3. But the main manipulation depends on the minimization of E = K - (sum of binary step functions): if it is higher than 0 I need to decrease it, if less, increase it, so it is like increasing or decreasing a weight. So I do not know what to do exactly. Moreover, the schematic has the integrating functions, probably to determine x continuously rather than discretely. So there are probably two ways to find this optimal x: to use just defined inputs, outputs, and arbitrarily defined weights, or to follow the schematic. How do I join these two approaches in one neural network design?
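A plain-code sketch of the winner-set subtraction idea above may help (this is just the E = K - sum-of-steps thresholding written out, not a back-propagation implementation, and it assumes the input values are distinct):

```python
def winners(values, k):
    """Return the set of the k largest values by finding a threshold x
    with E = k - sum(step(v - x)) = 0. A scan over candidate thresholds
    stands in for the integrator/feedback loop of the analog scheme."""
    s = sorted(values)
    # try thresholds halfway between sorted neighbours
    for i in range(len(s) - 1):
        x = (s[i] + s[i + 1]) / 2
        if sum(1 for v in values if v - x > 0) == k:
            return {v for v in values if v > x}
    return set(values)

values = [5, 7, 3, 11, 6, 9]
ordered = []
prev = set()
for k in range(1, 4):                  # K = 1, 2, 3
    current = winners(values, k)
    ordered += sorted(current - prev)  # exactly one new winner per step
    prev = current
# ordered is now [11, 9, 7]: the three largest values, in descending order
```

Subtracting each winner set from the next (`current - prev`) is well defined even though the sets are unordered, because consecutive sets differ by exactly one element when the values are distinct.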
0 Kudos
Message 9 of 33
(3,582 Views)

It sounds like a lot of this is over your head.  You're going to struggle if you don't take some time to work with the basics.

 

You don't have any hardware, so why would you find any attached to your computer? If you want to add a target from the select-a-device option, you'll likely want to right-click on the project title, rather than on My Computer, and find the new target there. You'll have a wider range of targets to work with, as most are remote devices rather than being internal to your computer.

 

You haven't really been clear about your goal here. You can't use any hardware, but you want to focus on FPGA programming. That will eventually require hardware. Even the best of simulations won't match the precise timing you'll experience with the hardware. At best, you'll be able to create a first step without hardware. Will you eventually get to use hardware? Has your hardware already been chosen for you, or can you use anything (can you use a cRIO, or are you limited to one of the sbRIO or R-Series options)?

0 Kudos
Message 10 of 33
(3,568 Views)