This example demonstrates the use of the Model Importer API in the Vision Development Module to perform object detection for a defect inspection application using deep learning.
The example uses a pre-trained model, SSD_MobilenetV1, trained in TensorFlow. The model is loaded using the Model Importer VI to detect defects in the images.
The example has two controls:
If I read this correctly, the heavy lifting (training, configuration, etc.) is done first in Python; then, when we want to run the model, we can use this set of VIs to reload it and classify images. Is that correct?
For classification, is NI Vision running Python to do this, or is TensorFlow being called directly somehow? Is this any better than doing the classification with some LabVIEW<->Python bridge?
What Python, TensorFlow, etc. versions are needed?
Yes, TensorFlow is used to do the heavy lifting, and the LabVIEW VIs handle model loading and inference.
The supported TensorFlow version is 1.4. Python is not used here for execution.
For deployment scenarios, a LabVIEW<->Python bridge doesn't make life easier for the end user.
The Python version would be whichever one you use for your development.
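Since inference runs from a frozen TensorFlow graph rather than through Python, the training side needs to export such a graph for the Model Importer to load. Below is a minimal sketch of freezing a TF 1.x checkpoint into a .pb file; the checkpoint path, output file name, and output node names (taken from the standard TF Object Detection API conventions) are assumptions, not details from this example:

```python
def freeze_ssd_graph(ckpt_prefix, frozen_pb_path,
                     output_nodes=("detection_boxes", "detection_scores",
                                   "detection_classes", "num_detections")):
    """Sketch: freeze a TF 1.x SSD checkpoint into a self-contained .pb.

    ckpt_prefix    -- checkpoint prefix, e.g. "train_dir/model.ckpt-10000"
    frozen_pb_path -- where to write the frozen graph, e.g. "frozen_model.pb"
    output_nodes   -- graph output node names (assumed TF OD API defaults)
    """
    # Import inside the function so the sketch is readable without
    # TensorFlow installed; TF 1.4 is the supported version per the thread.
    import tensorflow as tf
    from tensorflow.python.framework import graph_util

    with tf.Session(graph=tf.Graph()) as sess:
        # Rebuild the graph and restore the trained weights.
        saver = tf.train.import_meta_graph(ckpt_prefix + ".meta")
        saver.restore(sess, ckpt_prefix)
        # Bake the variables into constants so the graph is standalone.
        frozen = graph_util.convert_variables_to_constants(
            sess, sess.graph_def, list(output_nodes))

    with tf.gfile.GFile(frozen_pb_path, "wb") as f:
        f.write(frozen.SerializeToString())
```

The TF Object Detection API's own `export_inference_graph.py` script performs an equivalent export and may be the simpler route if you trained with that framework.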
Is the code for generating the TensorFlow model available, along with the training data set, to help us set up and train our own models?