04-04-2020 05:05 PM
We are starting to look into using machine learning for an image classification task. We currently use LabVIEW to collect and process images with conventional machine vision techniques, and we have started building TensorFlow-trained models to use with the LabVIEW Deep Learning VIs. We are working with TensorFlow 2.1, and when trying to import the SavedModel *.pb format we get an error that I believe is due to a version incompatibility. We are using LabVIEW 2018 at the moment.
Can anyone tell me which TensorFlow version is supported by the latest version of LabVIEW (2019/2020)? That would be a reason for us to upgrade at this time. I am also having trouble finding any information on how to save a TensorFlow model created with the 2.x tools in a format compatible with a lower version, if that is even possible (probably not, but I figured I would ask in case something like this exists).
Thanks
My apologies: we are not experts in TensorFlow/Python programming compared to our experience with LabVIEW.
01-15-2021 03:18 AM
Hi,
I'm also interested in having the LabVIEW TensorFlow wrapper updated to 2.x, with GPU support, which is essential for real-time inference. It is currently 1.4.1 and CPU-only.
Does anyone have any news about that? It seems that Matlab has better support for this library, which is a pity.
Thanks for your help
01-15-2021 04:50 AM
@Tda10 wrote:
It's currently 1.4.1
It is possible to train a model in 2.x, export the trained model as frozen.pb, and then load that frozen.pb in 1.x.
"Frozen" means the graph and the weights are stored in one file; there is also a saved.pb (the SavedModel format), which keeps the two separate.
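To make the round-trip concrete, here is a minimal sketch of the 1.x-style loading step (assuming a file `frozen.pb` produced as above; the `tf.compat.v1` shims make it work from a TF 2.x install as well):

```python
import tensorflow as tf

def load_frozen_graph(pb_path):
    """Read a frozen GraphDef (graph and weights baked into one file)
    and import it into a fresh Graph, TF 1.x style."""
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")
    return graph
```

Inference then runs through a `tf.compat.v1.Session(graph=...)`, which is roughly what a 1.x-era consumer of the file does.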
but ...
@Tda10 wrote:
LabVIEW TensorFlow is 1.4.1, CPU-only
yes, there is no GPU support in the LabVIEW TensorFlow IMAQ wrapper, whereas in Python TensorFlow it is conveniently available.
For 2.0 there still exists a separate tensorflow-gpu==2.0 package, but in later versions there is only one tensorflow package, with GPU support integrated.
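A quick runtime check (TF >= 2.1, where the unified package applies) shows whether a given install actually sees a GPU:

```python
import tensorflow as tf

# With the unified package, GPU support is compiled in; whether a GPU
# is actually usable depends on the local driver/CUDA setup.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
```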
01-15-2021 06:35 AM
Thanks for your answer.
The issue is that the frozen-model format seems to be deprecated in TF 2.x, and I get errors when trying to export the model as frozen. They suggest:
# Save a frozen model
# (frozen_keras_graph is a helper from the recipe I found, not a TensorFlow API)
myfunc_model = tf.function(model).get_concrete_function(tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
mygraph_def = frozen_keras_graph(myfunc_model)
tf.io.write_graph(mygraph_def, '/tmp/tf_model', 'frozen_graph.pb')
-> Error: tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array
That's the reason why I asked for a more recent implementation of TF 2.x in LabVIEW. That way, we could use the TF 2.x saved_model.pb format directly.
01-20-2021 07:03 AM - edited 01-20-2021 07:04 AM
@Tda10 wrote:
The issue is that the frozen model seems to be deprecated in TF2.x
using sess is deprecated, but is still a valid way to access the actual Tensorflow graph:
https://www.tensorflow.org/api_docs/python/tf/io/write_graph
you need another processing step before you save to file, which is to convert the model's variables to constants
I learned this from this blog post:
https://blog.csdn.net/huangcong159/article/details/106882311
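The conversion the blog describes boils down to roughly this (a sketch, assuming `model` is a Keras model; note that `convert_variables_to_constants_v2` lives under `tensorflow.python`, an internal namespace, so the import path may move between releases):

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

def export_frozen_graph(model, logdir, name="frozen_graph.pb"):
    """Freeze a Keras model: fold its variables into constants and
    write the resulting GraphDef to a single .pb file."""
    # Trace the model into a ConcreteFunction with a fixed signature.
    full_model = tf.function(model).get_concrete_function(
        tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
    )
    # The extra step: replace variable reads (dtype "resource") with
    # constants. Skipping this is what triggers the error above.
    frozen_func = convert_variables_to_constants_v2(full_model)
    tf.io.write_graph(
        frozen_func.graph.as_graph_def(), logdir, name, as_text=False
    )
```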
01-21-2021 09:55 AM
Thanks again. I'm making progress.
Now I can load a frozen model (generated from TF2.x) in Labview with "IMAQ DL Create Model"
Nevertheless, some operations produce errors. It seems that new attributes have been added that are not understood by the binary interpreter used by LabVIEW. I still have the impression that an upgrade to TF 2.x is needed on the LabVIEW side.
For instance :
tf.keras.layers.MaxPooling2D(pool_size=(2, 2), padding="valid") now produces a MaxPool node with a new "explicit_paddings" attribute that I can't remove from the generated binary frozen file.
Error -1074395539 occurred at IMAQ DL Model Create
Error reported from TensorFlow
Error Code: 3
Description: NodeDef mentions attr 'explicit_paddings' not in Op<name=MaxPool; signature=input:T -> output:T; attr=T:type,default=DT_FLOAT,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_UINT8, DT_INT16, DT_INT8, DT_UINT16, DT_QINT8]; attr=ksize:list(int),min=4; attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW", "NCHW_VECT_C"]>; NodeDef: {{node sequential_2/max_pooling2d/MaxPool}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
Possible reason(s):
Unable to load the Model file.
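For anyone who wants to try, stripping the attribute amounts to GraphDef surgery like this (a sketch only: it assumes the attribute still holds its default empty value, and whether the 1.4-era interpreter on the LabVIEW side then accepts the graph is untested):

```python
import tensorflow as tf

def strip_attr(pb_path, out_path, attr="explicit_paddings"):
    """Delete a named attribute from every node of a frozen GraphDef.
    Only safe when the attribute carries its default (empty) value."""
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    for node in graph_def.node:
        if attr in node.attr:
            del node.attr[attr]
    with tf.io.gfile.GFile(out_path, "wb") as f:
        f.write(graph_def.SerializeToString())
```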
01-22-2021 06:34 AM
Thanks for sharing -
I think the IMAQ deep learning API is meant to be used for object detection, not for classification.
Maybe this is limiting you right now ...
are you trying to run a simple Convolutional Neural Network?
something like this?
however,
someone successfully ran the infamous cats'n'dogs classification with the IMAQ deep learning tools, see the comments in:
https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Tensorflow-API-for-LabVIEW/idi-p/3248737
especially the last comment:
07-09-2021 06:18 AM
How did you solve the problem, please? I have the same error, and I'm using LabVIEW 2021 (64-bit), Python 3.6, and TensorFlow 2.5.0.
07-09-2021 06:19 AM
How did you solve the problem please ??
07-14-2021 02:15 PM
@eya-nagazi wrote:
How did you solve the problem please ??
As this is TensorFlow CPU-only, it was far too slow the last time I tried.
We also tried tensorflow-gpu via the Python Node, but the data transfer was too slow ... this may change with LabVIEW 2021 introducing Python 3.9.
So we still use LabVIEW (of course ...) but externalised the neural-net workload to separate hardware, using Ethernet to communicate between LabVIEW and Python.
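The Python side of such a bridge can be as small as a length-prefixed TCP loop. This is a generic sketch, not our actual code: the port and the 4-byte big-endian framing are arbitrary choices, and `handler` stands in for the real model call:

```python
import socket
import struct

def _recv_exact(conn, n):
    """Read exactly n bytes, or return b"" if the peer closed."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            return b""
        buf += chunk
    return buf

def serve_inference(handler, host="0.0.0.0", port=5005):
    """Serve one client: each request is a 4-byte big-endian length
    followed by that many payload bytes (e.g. a raw image); the
    reply uses the same framing."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        while True:
            header = _recv_exact(conn, 4)
            if not header:                # client closed the connection
                break
            (length,) = struct.unpack(">I", header)
            payload = _recv_exact(conn, length)
            if length and not payload:
                break
            reply = handler(payload)      # run the model here
            conn.sendall(struct.pack(">I", len(reply)) + reply)
    srv.close()
```

On the LabVIEW side this maps onto TCP Open Connection / TCP Write / TCP Read with the same 4-byte length prefix.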