
Easiest way to see the output of a hidden layer in Tensorflow/Keras?

I am working on a GAN and I’m trying to diagnose how and why mode collapse occurs. I want to be able to look “under the hood” and see what the outputs of various layers in the network look like for the last minibatch. I saw you can do something like model.layers[5].output, but this produces a tensor of shape [None, 64, 64, 512], which is a symbolic tensor and not the actual output from the previous run. My only other idea is to recompile a model that’s missing all the layers after the one I’m interested in and then run a minibatch through, but this seems like an extremely inefficient way to do it, and I’m wondering if there’s an easier way. I want to run some statistics on layer outputs during the training process to see where things might be going wrong.


Answer

I did this for a GAN I was training myself. The method I used extends to both the generator (G) and discriminator (D) of a GAN.

The idea is to make a model with the same input as D or G, but with outputs according to each layer in the model that you require.

For me, I found it useful to check the activations. In Keras, this works with any model `model` (which will be D or G for you and me).

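A sketch of the auxiliary-model idea described above: build a second Keras model that shares the original model's input but emits every layer's output, then run a batch through it. The toy two-layer model below is a placeholder for your D or G; the layer sizes are arbitrary.

```python
import numpy as np
from tensorflow import keras

# Toy stand-in model (substitute your D or G here)
inp = keras.Input(shape=(8,))
h = keras.layers.Dense(4, activation="relu")(inp)
out = keras.layers.Dense(1)(h)
model = keras.Model(inp, out)

# Auxiliary model: same input, one output per layer (skipping the InputLayer)
probe = keras.Model(inputs=model.inputs,
                    outputs=[layer.output for layer in model.layers[1:]])

# Run a minibatch through and get the concrete activations of every layer
batch = np.random.rand(2, 8).astype("float32")
activations = probe.predict(batch, verbose=0)  # list of arrays, one per layer
for layer, act in zip(model.layers[1:], activations):
    print(layer.name, act.shape)
```

Because `probe` reuses the original layers, its outputs always reflect the current (trained) weights; there is no copying of parameters involved.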

Now the rest is quite model-specific. This is the basic method for checking the outputs of the layers in a given model.

Note that this can be done before, during, or after training, and it does not require re-compiling the model.
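For the statistics mentioned in the question, you can also probe a single layer and summarize its activations, for example to spot collapsing variance or dead ReLU units. Again a sketch with a placeholder model; the layer name `"hidden"` is an assumption for illustration.

```python
import numpy as np
from tensorflow import keras

# Toy stand-in for D or G
inp = keras.Input(shape=(8,))
x = keras.layers.Dense(16, activation="relu", name="hidden")(inp)
out = keras.layers.Dense(1, name="logit")(x)
model = keras.Model(inp, out)

# Probe a single named layer; weights are shared with `model`,
# so no re-compiling or re-training is involved
probe = keras.Model(model.inputs, model.get_layer("hidden").output)

batch = np.random.rand(4, 8).astype("float32")
acts = probe.predict(batch, verbose=0)

# Simple diagnostics: spread of activations and count of dead ReLU units
print(f"mean={acts.mean():.3f}  std={acts.std():.3f}  "
      f"dead units={(acts.max(axis=0) == 0.0).sum()}")
```

Calling this periodically inside the training loop (e.g. every N minibatches) gives a cheap running picture of where the network's activations are heading.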

The plotting of activation maps in this case is relatively straightforward and, as you mentioned, you will probably have something specific you want to do. Still, I have to link this beautiful example here.
