Import ONNX models to TensorFlow 2.x?

I created a modified LeNet model using TensorFlow that looks like this:

import tensorflow as tf
from tensorflow.keras import layers, models

img_height = img_width = 64
BS = 32

model = models.Sequential()
model.add(layers.InputLayer((img_height,img_width,1), batch_size=BS))
model.add(layers.Conv2D(filters=32, kernel_size=(3, 3), strides=(1, 1), batch_size=BS, activation='relu', padding="valid"))
model.add(layers.Conv2D(filters=64, kernel_size=(3, 3), strides=(1, 1), batch_size=BS, activation='relu', padding='valid'))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2), batch_size=BS, padding='valid'))
model.add(layers.Dropout(0.25))
model.add(layers.Conv2D(filters=128, kernel_size=(1,1), strides=(1,1), batch_size=BS, activation='relu', padding='valid'))
model.add(layers.Dropout(0.5))
model.add(layers.Conv2D(filters=2, kernel_size=(1,1), strides=(1,1), batch_size=BS, activation='relu', padding='valid'))
model.add(layers.GlobalAveragePooling2D())
model.add(layers.Activation('softmax'))
model.summary()

When I finish training, I save the model using tf.keras.models.save_model:

import time

num = time.time()  # timestamp used as the directory name
tf.keras.models.save_model(model, './saved_models/' + str(num) + '/')
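For reference, a model saved this way is a SavedModel directory containing saved_model.pb together with the "variables" and "assets" subfolders. Its serving signature can be inspected with TensorFlow's saved_model_cli tool (the directory name below reuses the example timestamp above):

! saved_model_cli show --dir saved_models/1645088924.84102/ --tag_set serve --signature_def serving_default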

Then I convert this model to ONNX format using the tf2onnx module:

! python -m tf2onnx.convert --saved-model saved_models/1645088924.84102/ --output 1645088924.84102.onnx
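One way to confirm the conversion succeeded is to run the ONNX model directly with onnxruntime. A minimal sketch, assuming onnxruntime is installed and the model takes a single float32 NHWC input of shape (32, 64, 64, 1), matching the model built above:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("1645088924.84102.onnx")
input_name = sess.get_inputs()[0].name             # input name assigned by tf2onnx
dummy = np.random.rand(32, 64, 64, 1).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})      # list holding one (32, 2) softmax array
print(outputs[0].shape)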

I want a way to load the same model back into TensorFlow 2.x. I tried using onnx_tf to convert the ONNX model into a TensorFlow .pb model:

import onnx
from onnx_tf.backend import prepare

onnx_model = onnx.load("1645088924.84102.onnx")  # load the ONNX model
tf_rep = prepare(onnx_model)                     # prepare the TensorFlow representation
tf_rep.export_graph("1645088924.84102.pb")       # export as a TensorFlow graph
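As a side note, the representation returned by prepare can be exercised directly in memory before anything is exported; a minimal sketch, again assuming a single float32 NHWC input:

import numpy as np

dummy = np.random.rand(32, 64, 64, 1).astype(np.float32)
outputs = tf_rep.run(dummy)  # run the ONNX graph through the TF backend
print(outputs)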

However, exporting this way produces only a .pb file, while the load_model method in TensorFlow 2.x expects a SavedModel directory that also contains two folders, "variables" and "assets", next to the .pb file.

Either a way to make the bare .pb file work as if it had the "assets" and "variables" folders, or a method that can generate a complete model from ONNX, would be appreciated.

I’m using a JupyterHub server, and everything runs inside an Anaconda environment.


Answer

As it turns out, the easiest method is the one TensorFlow Support suggested in a comment on the original post: convert the .pb file back to .h5 and then reuse the model. For inference, we can parse the frozen graph into a graph_def and wrap it as a concrete function.

Converting .pb to .h5: How to convert .pb file to .h5. (Tensorflow model to keras)

For inferencing: https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/
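The frozen-graph route from the second link boils down to parsing the .pb into a GraphDef and pruning it to a concrete function. A minimal sketch, assuming a frozen graph named 1645088924.84102.pb whose input and output tensors are named x:0 and Identity:0; the actual names depend on the export and must be read from the graph, e.g. by printing [n.name for n in graph_def.node]:

import tensorflow as tf

def wrap_frozen_graph(graph_def, inputs, outputs):
    # Import the GraphDef into a wrapped tf.function, then prune it down
    # to a concrete function mapping the named inputs to the named outputs.
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped = tf.compat.v1.wrap_function(_imports_graph_def, [])
    graph = wrapped.graph
    return wrapped.prune(
        tf.nest.map_structure(graph.as_graph_element, inputs),
        tf.nest.map_structure(graph.as_graph_element, outputs))

with tf.io.gfile.GFile("1645088924.84102.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

frozen_func = wrap_frozen_graph(graph_def, inputs="x:0", outputs="Identity:0")
preds = frozen_func(tf.random.uniform((32, 64, 64, 1)))  # inference only; no variables folder needed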
