LabVIEW Analytics and Machine Learning Toolkit (2018)

Solved!

I developed several LabVIEW applications aimed at the acquisition, processing, and classification of the raw electroencephalographic (EEG) signal provided by the NeuroSky MindWave Mobile portable headset. I implemented an algorithm that generates various training and testing datasets based on multiple combinations of EEG rhythms (alpha, beta, gamma, delta, theta) and statistical features (mean, median, RMS, maximum, mode). The aim is to detect voluntary eye-blinks using a neural network (NN) model provided by the LabVIEW Analytics and Machine Learning (AML) Toolkit. There are two classes: 0 - No Eye-Blink Detected and 1 - Voluntary Eye-Blink Detected.

I encountered an issue and would like to know whether you have noticed it too, or whether this behavior is expected. The metrics (accuracy, precision, recall, F1 score) obtained from training the NN model differ from the metrics obtained from evaluating the same NN model. I should emphasize that I loaded the same dataset both for training the NN model and for evaluating it. After the model training process finishes, the accuracy equals 0.9833; after the model evaluation phase, the accuracy equals 1. Is this an acceptable situation? Given that I used the same training dataset and loaded the previously trained NN model, I expected a similar result (accuracy = 0.9833) when evaluating the NN model. Thank you very much.
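As a generic illustration (this is not the AML Toolkit's code, and the confusion-matrix counts below are made up purely so that the accuracy comes out to 0.9833), the four metrics mentioned above can all be derived from the counts of a binary confusion matrix for the two classes (1 = blink detected, 0 = no blink):

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall and F1 from a
    binary confusion matrix (class 1 = positive)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted blinks, how many were real
    recall = tp / (tp + fn)             # of real blinks, how many were caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts only: 59 blinks caught, 1 false alarm,
# 1 missed blink, 59 correct rejections (120 samples total)
acc, prec, rec, f1 = binary_metrics(tp=59, fp=1, fn=1, tn=59)
print(round(acc, 4))  # 0.9833
```

An accuracy of 1 therefore means every one of the evaluated samples was classified correctly, which is why it is suspicious when the training phase itself reported some misclassifications.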

Oana_R
Message 1 of 3

Hi, 

 

I am working on the classification of ECG signals using neural networks. I have also used the same dataset for training and testing. Could you share your VI?

 

Thank you,

Deepa

Message 2 of 3
Solution
Accepted by topic author OanaAndry21

Hi,

 

I have used the VI given in the official Examples of the LabVIEW Analytics and Machine Learning (AML) Toolkit. You can download it from this link: https://www.ni.com/en/support/downloads/software-products/download.labview-analytics-and-machine-lea...

 

From my point of view, the most useful example is Classification / Search Optimized Parameters.

 

Meanwhile, I noticed that it is not possible to get the same accuracy at training and at evaluation when loading the same dataset. During the training process, the dataset is divided into three partitions. For example, if the dataset contains 6000 samples, only 2000 of them are used for testing the Artificial Neural Network (ANN) model. Therefore, the accuracy reported at training time reflects testing the ANN model on those 2000 samples only. I wonder whether it is possible to identify which 2000 samples were used for testing.

 

In the evaluation process, on the other hand, a different accuracy is reported because it reflects testing the ANN model on the entire dataset, i.e. all 6000 samples, most of which the model was trained on.
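The effect can be sketched in a few lines of plain Python. This is only an analogy, not the AML Toolkit's internals: the dataset is scaled down to 600 samples, and a 1-nearest-neighbour rule stands in for the ANN, because it memorizes its training samples and so scores perfectly on them, exaggerating the same bias:

```python
import random

random.seed(0)

def noisy_label(x):
    # True rule: blink (1) if x > 0.5; flip 5% of labels as noise
    y = 1 if x > 0.5 else 0
    return 1 - y if random.random() < 0.05 else y

# Toy dataset: 600 samples (stand-in for the 6000 in the post)
X = [random.random() for _ in range(600)]
y = [noisy_label(x) for x in X]

# Training internally holds out a test partition (here 200 of 600)
X_train, y_train = X[:400], y[:400]
X_test, y_test = X[400:], y[400:]

def predict(x):
    # 1-nearest-neighbour "model": memorizes the training samples
    i = min(range(len(X_train)), key=lambda j: abs(X_train[j] - x))
    return y_train[i]

def accuracy(xs, ys):
    return sum(predict(a) == b for a, b in zip(xs, ys)) / len(xs)

# Accuracy reported "at training time": held-out partition only
acc_heldout = accuracy(X_test, y_test)

# Accuracy reported "at evaluation time": the entire dataset,
# two thirds of which the model has memorized, so it comes out higher
acc_full = accuracy(X, y)

print(acc_heldout, acc_full)
```

Because the memorized training portion is scored perfectly, the full-dataset figure is pulled up towards 1, which matches the accuracy of 1 you observed when evaluating on the same data the model was trained on.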

 

All the best,

Oana 

Oana_R
Message 3 of 3