How can I visualize the model?

I am trying out some sample GAN code; here is the generator.

I want to see the model visualized, but what my function returns is not a model.

summary() is apparently not a method of the object I get back, so what does it belong to, and how can I see the model visualized?

g_model = generator(input_z, output_channel_dim)
g_model.summary()  # AttributeError: 'Tensor' object has no attribute 'summary'

Here is my function.

def generator(z, output_channel_dim, is_train=True, alpha=0.2):  # alpha: leaky_relu slope

    with tf.variable_scope("generator", reuse= not is_train):
        
        # First FC layer --> 8x8x1024
        fc1 = tf.layers.dense(z, 8*8*1024)
        
        # Reshape it
        fc1 = tf.reshape(fc1, (-1, 8, 8, 1024))
        
        # Leaky ReLU
        fc1 = tf.nn.leaky_relu(fc1, alpha=alpha)

        
        # Transposed conv 1 --> BatchNorm --> LeakyReLU
        # 8x8x1024 --> 16x16x512
        trans_conv1 = tf.layers.conv2d_transpose(inputs=fc1,
                                                 filters=512,
                                                 kernel_size=[5, 5],
                                                 strides=[2, 2],
                                                 padding="SAME",
                                                 kernel_initializer=tf.truncated_normal_initializer(stddev=0.02),
                                                 name="trans_conv1")
        
        batch_trans_conv1 = tf.layers.batch_normalization(inputs = trans_conv1, training=is_train, epsilon=1e-5, name="batch_trans_conv1")
       
        trans_conv1_out = tf.nn.leaky_relu(batch_trans_conv1, alpha=alpha, name="trans_conv1_out")
        
        
        # Transposed conv 2 --> BatchNorm --> LeakyReLU
        # 16x16x512 --> 32x32x256
        trans_conv2 = tf.layers.conv2d_transpose(inputs=trans_conv1_out,
                                                 filters=256,
                                                 kernel_size=[5, 5],
                                                 strides=[2, 2],
                                                 padding="SAME",
                                                 kernel_initializer=tf.truncated_normal_initializer(stddev=0.02),
                                                 name="trans_conv2")
        
        batch_trans_conv2 = tf.layers.batch_normalization(inputs = trans_conv2, training=is_train, epsilon=1e-5, name="batch_trans_conv2")
       
        trans_conv2_out = tf.nn.leaky_relu(batch_trans_conv2, alpha=alpha, name="trans_conv2_out")
        
        
        # Transposed conv 3 --> BatchNorm --> LeakyReLU
        # 32x32x256 --> 64x64x128
        trans_conv3 = tf.layers.conv2d_transpose(inputs=trans_conv2_out,
                                                 filters=128,
                                                 kernel_size=[5, 5],
                                                 strides=[2, 2],
                                                 padding="SAME",
                                                 kernel_initializer=tf.truncated_normal_initializer(stddev=0.02),
                                                 name="trans_conv3")
        
        batch_trans_conv3 = tf.layers.batch_normalization(inputs = trans_conv3, training=is_train, epsilon=1e-5, name="batch_trans_conv3")
       
        trans_conv3_out = tf.nn.leaky_relu(batch_trans_conv3, alpha=alpha, name="trans_conv3_out")

        
        # Transposed conv 4 --> BatchNorm --> LeakyReLU
        # 64x64x128 --> 128x128x64
        trans_conv4 = tf.layers.conv2d_transpose(inputs=trans_conv3_out,
                                                 filters=64,
                                                 kernel_size=[5, 5],
                                                 strides=[2, 2],
                                                 padding="SAME",
                                                 kernel_initializer=tf.truncated_normal_initializer(stddev=0.02),
                                                 name="trans_conv4")
        
        batch_trans_conv4 = tf.layers.batch_normalization(inputs = trans_conv4, training=is_train, epsilon=1e-5, name="batch_trans_conv4")
       
        trans_conv4_out = tf.nn.leaky_relu(batch_trans_conv4, alpha=alpha, name="trans_conv4_out")

        
        # Transposed conv 5 --> tanh
        # 128x128x64 --> 128x128x3
        logits = tf.layers.conv2d_transpose(inputs=trans_conv4_out,
                                            filters=output_channel_dim,  # 3 for RGB
                                            kernel_size=[5, 5],
                                            strides=[1, 1],
                                            padding="SAME",
                                            kernel_initializer=tf.truncated_normal_initializer(stddev=0.02),
                                            name="logits")
         
        out = tf.tanh(logits, name="out")
        
        return out
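
As an aside, for a raw TF 1.x graph like this one, the standard way to see the architecture is TensorBoard rather than summary(). A minimal sketch, assuming TF 1.x, a latent size of 100, and an arbitrary log directory:

import tensorflow as tf

# build the graph as above, then dump it so TensorBoard can render it
input_z = tf.placeholder(tf.float32, (None, 100), name="input_z")
g_model = generator(input_z, output_channel_dim=3)

writer = tf.summary.FileWriter("logs", tf.get_default_graph())
writer.close()
# then run: tensorboard --logdir logs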


Answer
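
First, the error itself: summary() is a method of tf.keras.Model, whereas your generator function returns a plain Tensor (the output node of a graph), which has no such method. A minimal illustration of the difference (the layer sizes here are arbitrary):

import tensorflow as tf

inputs = tf.keras.Input(shape=(100,))
outputs = tf.keras.layers.Dense(8 * 8 * 1024)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

model.summary()       # works: model is a tf.keras.Model
tensor = model(tf.ones((1, 100)))
# tensor.summary()    # AttributeError: 'Tensor' object has no attribute 'summary'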

One possible approach (or at least an idea) is to wrap your operation in a Lambda layer and use that to build the model, something like:

# add an x -> x^2 layer
model.add(Lambda(lambda x: x ** 2))
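
For context, here is that idea in a self-contained form (a sketch; the input shape is arbitrary). Note that the Lambda layer itself owns no weights:

from tensorflow import keras
from tensorflow.keras.layers import Lambda

model = keras.Sequential([
    keras.Input(shape=(4,)),
    Lambda(lambda x: x ** 2),  # stateless op wrapped as a layer
])
model.summary()  # lists the Lambda layer with 0 parameters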

But this is not a general solution (AFAIK), and while it is possible to use Variables with Lambda layers, the practice is discouraged because it easily leads to bugs. That is exactly what we ran into when wrapping your generator function in a Lambda layer. Here is what we tried and what we got:

import tensorflow as tf 
tf.__version__ # 2.5

def generator(z, output_channel_dim=10, is_train=True, alpha=0.1):
    with tf.compat.v1.variable_scope("generator", reuse=tf.compat.v1.AUTO_REUSE):
        # First FC layer --> 8x8x1024
        fc1 = tf.compat.v1.layers.dense(z, 8*8*1024)
    ......
    ......
from tensorflow import keras 
from tensorflow.keras.layers import Lambda

generator(tf.ones((1, 8, 8, 1)), is_train=True).shape
TensorShape([64, 128, 128, 3])

(The batch dimension becomes 64 because tf.layers.dense acts only on the last axis, so the reshape to (-1, 8, 8, 1024) folds the 8x8 spatial grid of the dummy input into the batch.)

Wrapping it in a Lambda layer

When we did, we received the following warning, which points at a potential bug:

x = keras.Input(shape=(8, 8, 1))
y = Lambda(generator)(x)
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (lambda_1), but
are not present in its tracked objects:
  <tf.Variable 'generator/dense/kernel:0' shape=(1, 65536) dtype=float32>
  <tf.Variable 'generator/dense/bias:0' shape=(65536,) dtype=float32>
  <tf.Variable 'generator/trans_conv1/kernel:0' shape=(5, 5, 512, 1024) dtype=float32>
  <tf.Variable 'generator/trans_conv1/bias:0' shape=(512,) dtype=float32>
.....
.....
from tensorflow.keras import Model 
model = Model(inputs=x, outputs=y)
model.summary()

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         [(None, 8, 8, 1)]         0         
_________________________________________________________________
lambda_2 (Lambda)            (None, 128, 128, 3)       0         
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
_________________________________________________________________

As the warning says, the tf.Variables are not present in the Lambda layer's tracked objects, which is why the summary above reports zero parameters.
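
The clean fix the warning hints at is to implement the computation as a subclass of keras.layers.Layer (or rebuild the generator from stock Keras layers), so that every variable belongs to a tracked sub-layer. A minimal sketch of the pattern, covering only the first FC block of your generator:

import tensorflow as tf
from tensorflow import keras

class GeneratorHead(keras.layers.Layer):
    """First FC block of the generator, with properly tracked weights."""
    def __init__(self, alpha=0.1, **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha
        self.fc = keras.layers.Dense(8 * 8 * 1024)  # sub-layer, so tracked

    def call(self, z):
        x = self.fc(z)
        x = tf.reshape(x, (-1, 8, 8, 1024))
        return tf.nn.leaky_relu(x, alpha=self.alpha)

x = keras.Input(shape=(100,))
y = GeneratorHead()(x)
model = keras.Model(x, y)
model.summary()  # now reports the Dense weights instead of 0 params

Extending this pattern to the remaining blocks gives a model whose summary() (and keras.utils.plot_model) shows the full architecture, which is what the question asks for.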
