Tag: keras
Embedding an Xception model behind a new 3-channel input
I have an Xception model, and I combined the model to change the input channels to 3; however, I got an error. Answer: You simply have to embed Xception the correct way in your new model: we create a new Input layer, then apply upsampling, and in the end pass everything to Xception. Here is the running
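The post's code is not shown, so here is a minimal sketch of the described approach, assuming a small 32×32 input that is upsampled to a size Xception accepts (it requires at least 71×71):

```python
import tensorflow as tf
from tensorflow.keras import layers

# New Input layer with 3 channels; UpSampling2D enlarges 32x32 to 96x96,
# which satisfies Xception's minimum input size of 71x71
inputs = layers.Input(shape=(32, 32, 3))
x = layers.UpSampling2D(size=(3, 3))(inputs)

# Embed Xception as a layer of the new model (weights and head are assumptions)
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(96, 96, 3)
)
x = base(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.summary()
```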
Keras model fits on data with the wrong shape
I’ve created the following model and the following dummy data, with shapes of (4, None, 2) and (4, 3). Looking at the model structure, one can see that the model has 3 outputs of shape (None, 1). I was wondering how come the fit works, when I expected them to be of shape (4, 3, 1) and not (4,
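One likely reason fit() runs without complaint (my assumption; the post's code isn't shown) is loss broadcasting: Keras' element-wise losses silently broadcast a (4, 3) target against a (None, 1) output instead of raising a shape error:

```python
import tensorflow as tf

y_true = tf.ones((4, 3))   # target shaped like the dummy labels
y_pred = tf.zeros((4, 1))  # one model output of shape (None, 1)

# The squared error broadcasts (4, 1) against (4, 3) to (4, 3), then the
# loss reduces over the last axis, so no shape mismatch is ever reported
loss = tf.keras.losses.mse(y_true, y_pred)
print(loss.shape)  # (4,)
```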
tf.keras.BatchNormalization giving unexpected output
The output of the above code (in TensorFlow 1.15) is: My problem is why the same function gives completely different outputs. I also played with some of the parameters of the functions, but the result was the same. For me, the second output is what I want. Also, PyTorch’s batchnorm gives the same output as the second one. So
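A hedged guess, since the post's code isn't shown: the usual cause of two "completely different" BatchNormalization outputs is the training flag. In training mode the layer normalizes with batch statistics; in inference mode it uses the running averages, which start at mean 0 and variance 1. A sketch in TF 2 eager style for brevity:

```python
import numpy as np
import tensorflow as tf

x = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)
bn = tf.keras.layers.BatchNormalization()

# Inference mode: uses moving mean/variance (initially 0 and 1), so the
# output is essentially the input
print(bn(x, training=False).numpy())

# Training mode: normalizes with the current batch's mean and variance,
# matching what PyTorch's batchnorm does in train mode
print(bn(x, training=True).numpy())
```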
Adaptation module design for stacking two CNNs
I’m trying to stack two different CNNs using an adaptation module to bridge them, but I’m having a hard time determining the adaptation module’s layer hyperparameters correctly. To be more precise, I would like to train the adaptation module to bridge two convolutional layers: Layer A with output shape (29, 29, 256) and Layer B with input shape (8, 8, 384). So, after Layer A,
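One possible adaptation module (an assumption on my part, not the post's solution) is a single strided convolution; with valid padding the output size is floor((29 − 8) / 3) + 1 = 8, giving exactly (8, 8, 384):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Bridge Layer A's output (29, 29, 256) to Layer B's input (8, 8, 384)
a_out = layers.Input(shape=(29, 29, 256))
bridged = layers.Conv2D(
    filters=384, kernel_size=8, strides=3, padding="valid", activation="relu"
)(a_out)

adapter = tf.keras.Model(a_out, bridged)
adapter.summary()  # output shape: (None, 8, 8, 384)
```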
Keras flatten: ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor
I have the error mentioned in the title, with the following code. This sends the following error. According to a question asked about the same error, it happens when you mix up keras and tf.keras. But I think I have defined the imports accordingly, so unless there is a clash between imports or a bad definition of them, I do not
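If the mixed-imports diagnosis applies, the fix is to take every layer and model from a single namespace; a minimal consistent sketch:

```python
# All imports from tensorflow.keras -- never mixed with standalone `keras`
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(28, 28))
x = layers.Flatten()(inputs)
outputs = layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.summary()
```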
Keras: Does model.predict() require normalized data if I train the model with normalized data?
After completing model training using Keras, I am trying to use Keras’ model.predict() in order to test the model on novel inputs. When I trained the model, I normalized my training data with scikit-learn’s MinMaxScaler(). Do I need to normalize the data as well when using model.predict()? If so, how do I do it? Answer: Yes, you need to. Because
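A sketch of the usual pattern (feature shapes are made up): fit the scaler on the training data once, then reuse that same fitted scaler for anything passed to predict():

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x_train = np.random.rand(100, 4) * 50.0  # dummy training features
x_new = np.random.rand(10, 4) * 50.0     # dummy novel inputs

scaler = MinMaxScaler()
x_train_scaled = scaler.fit_transform(x_train)  # fit on training data only
# ... model.fit(x_train_scaled, y_train) ...

# transform (not fit_transform!) novel data with the same fitted scaler
x_new_scaled = scaler.transform(x_new)
# predictions = model.predict(x_new_scaled)
```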
InvalidArgumentError training multivariate LSTM autoencoder
I tried to do experiments on different datasets using this model; it works fine for univariate time series. However, I get an issue when trying it for multivariate time series, and I think it’s due to the TimeDistributed layer, but I am not sure. I tried to read different posts about the same question with no luck. trainx
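A shape-consistent multivariate sketch (dimensions are assumptions): a frequent source of InvalidArgumentError here is a final TimeDistributed(Dense(1)) left over from the univariate version, where the Dense width must instead equal the number of features:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

timesteps, n_features = 10, 3  # assumed dimensions

inputs = layers.Input(shape=(timesteps, n_features))
encoded = layers.LSTM(32)(inputs)
x = layers.RepeatVector(timesteps)(encoded)
x = layers.LSTM(32, return_sequences=True)(x)
# Dense width must equal n_features for multivariate reconstruction
outputs = layers.TimeDistributed(layers.Dense(n_features))(x)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

trainx = np.random.rand(16, timesteps, n_features)
autoencoder.fit(trainx, trainx, epochs=1, verbose=0)
```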
Keras – Hyper Tuning the initial state of the model
I’ve written an LSTM model that predicts sequential data. I’ve tuned some of the layers’ params using AWS SageMaker. While validating the model, I ran a model with a specific configuration several times. Most of the time the results are similar; however, one run was much better than the others, which led me to think that the initial state of
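One way to pin down that run-to-run variation (my suggestion, not from the post) is to fix the seeds that govern weight initialization and data shuffling, so a lucky initial state can be reproduced and compared across runs:

```python
import random
import numpy as np
import tensorflow as tf

def set_global_seed(seed: int) -> None:
    # Fixes the Python, NumPy, and TensorFlow RNGs so weight initialization
    # and shuffling are reproducible across runs
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)

set_global_seed(42)  # rerun with different seeds to compare initial states
```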
How does TensorFlow initialize the weight values in my neural network?
I found that by default Keras uses Glorot/Xavier to initialize weights; this means that the values will be between ±sqrt(6 / float(F_in + F_out)). But in my case I use the architecture below, with ishape = (None, 4): I don’t use a fixed input size. (My input data is a DNA sequence in one-hot encoding.) How does Keras initialize
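The key point, shown as a small sketch: Glorot's F_in/F_out come from the kernel's shape, which is fixed once the layer is built, so the variable None dimension in ishape = (None, 4) never enters the limit:

```python
import tensorflow as tf
from tensorflow.keras import layers

# A Dense layer built on inputs of shape (batch, None, 4): the kernel is
# (4, 8) regardless of the unknown sequence length, so fan_in = 4 and
# fan_out = 8, giving a Glorot limit of sqrt(6 / (4 + 8))
layer = layers.Dense(8)
layer.build(input_shape=(None, None, 4))
print(layer.kernel.shape)  # (4, 8)
```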
Evaluating DenseNet model in Keras with weighted classes
I am doing binary classification in Keras, using DenseNet. I created weighted classes; as a result, I have the following. I fitted the model with class_weight. But when I want to evaluate the model, I am not sure how to evaluate the weighted model, because the class_weight is part of the fit history. How to update this code, using instead of default
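class_weight only reweights the training loss; a common workaround (an assumption, with made-up weights and a dummy stand-in model) is to pass an equivalent per-sample weight vector to evaluate():

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Dummy stand-ins for the question's DenseNet and test set
model = tf.keras.Sequential(
    [layers.Dense(1, activation="sigmoid", input_shape=(8,))]
)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
x_test = np.random.rand(20, 8)
y_test = np.random.randint(0, 2, size=20)

# Mirror the training-time class weighting at evaluation time by converting
# class_weight into one weight per test sample (weights are assumptions)
class_weight = {0: 1.0, 1: 3.0}
sample_weight = np.array([class_weight[int(label)] for label in y_test])

loss, accuracy = model.evaluate(x_test, y_test, sample_weight=sample_weight, verbose=0)
print(loss, accuracy)
```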