Multiclassification task using keras [closed]

Classification (not detection!) of several objects in one image is the problem. How can I do this using Keras? For example, if I have 6 classes (dogs, cats, birds, …) and two different objects (a cat and a bird) in the image, the label would be of the form: [0,1,1,0,0,0]. Which metric, loss function and optimizer are recommended? I would like to use a CNN.
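A minimal sketch of such a multi-label setup, with the layer sizes, input shape and class names as illustrative assumptions: one sigmoid unit per class (so several classes can be active at once), binary cross-entropy as the loss, Adam as the optimizer, and binary accuracy as the metric.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # dogs, cats, birds, ... (assumed)

# Multi-label classification: independent sigmoid outputs allow labels
# such as [0, 1, 1, 0, 0, 0] where two classes are present at once.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(NUM_CLASSES, activation='sigmoid'),  # sigmoid, not softmax
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',   # per-class binary loss
              metrics=['binary_accuracy'])  # per-label accuracy
```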

validation accuracy not improving

No matter how many epochs I use or how I change the learning rate, my validation accuracy stays in the 50s. I'm using 1 dropout layer right now, and if I use 2 dropout layers my max train accuracy is 40% with 59% validation accuracy. Currently, with 1 dropout layer, here are my results: Again, the max it can reach is 59%. Here's the graph obtained: No matter how many changes I make, the validation accuracy reaches at most 59%. Here's my code: I'm very confused why only my training accuracy is updating, not the validation accuracy. Here's the model summary: Answer: The size of
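The asker's code is not shown, so this is only an illustration of the pattern being discussed (all layer sizes and the dropout rate are assumptions): a single Dropout layer before the classifier head, with validation data passed to fit() so that val_accuracy is actually monitored.

```python
from tensorflow.keras import layers, models

# Illustrative architecture only; the question's own model is not shown.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),                # the single dropout layer
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=30,
#           validation_data=(x_val, y_val))
```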

In Keras, can I use an arbitrary algorithm as a loss function for a network?

I have been trying to understand this machine learning problem for many days now and it really confuses me; I need some help. I am trying to train a neural network whose input is an image and which generates another image as output (it is not a very large image, it is 8×8 pixels). And I have an arbitrary fancy_algorithm() “black box” function that receives the input and the prediction of the network (the two images) and outputs a float number that tells how good the output of the network was (calculates a loss). My problem is that I want to
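Keras can only back-propagate through a loss built from differentiable TensorFlow operations, so a black box that returns a plain float provides no gradients. A minimal sketch, assuming the scoring logic can be re-expressed with TF ops on the two 8×8 image tensors (the computation inside fancy_loss here is a placeholder, not the real algorithm):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def fancy_loss(y_true, y_pred):
    # Placeholder for the real scoring logic. To work as a Keras loss it
    # must be written with differentiable TensorFlow ops; an opaque
    # Python function returning a float cannot supply gradients.
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Tiny illustrative image-to-image model (8x8 single-channel input assumed).
model = models.Sequential([
    layers.Input(shape=(8, 8, 1)),
    layers.Conv2D(16, (3, 3), padding='same', activation='relu'),
    layers.Conv2D(1, (3, 3), padding='same', activation='sigmoid'),
])
model.compile(optimizer='adam', loss=fancy_loss)
```

If the algorithm genuinely cannot be expressed with TF ops, it cannot be used directly as a loss; a differentiable surrogate of it would be needed instead.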

Neural Network Results always the same

Edit: For anyone interested, I made it slightly better. I used an L2 regularizer of 0.0001, and I added two more dense layers with 3 and 5 nodes and no activation functions. I added dropout=0.1 for the 2nd and 3rd GRU layers, reduced the batch size to 1000, and also set the loss function to mae. Important note: I discovered that my TEST dataframe was extremely small compared to the train one, and that is the main reason it gave me very bad results. I have a GRU model which has 12 features as inputs and I'm trying to predict output power. I really do not understand though
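A hedged sketch of the configuration described in the edit (12 input features, stacked GRU layers with dropout on the 2nd and 3rd, L2 of 0.0001, small dense layers without activations, MAE loss, batch size 1000); the sequence length and unit counts are assumptions:

```python
from tensorflow.keras import layers, models, regularizers

TIMESTEPS = 24   # assumed sequence length
N_FEATURES = 12  # stated in the question

reg = regularizers.l2(0.0001)

model = models.Sequential([
    layers.GRU(64, return_sequences=True, kernel_regularizer=reg,
               input_shape=(TIMESTEPS, N_FEATURES)),
    layers.GRU(32, return_sequences=True, dropout=0.1, kernel_regularizer=reg),
    layers.GRU(16, dropout=0.1, kernel_regularizer=reg),
    layers.Dense(5, kernel_regularizer=reg),  # extra dense layers,
    layers.Dense(3, kernel_regularizer=reg),  # no activation, per the edit
    layers.Dense(1),                          # predicted output power
])
model.compile(optimizer='adam', loss='mae')
# model.fit(X_train, y_train, batch_size=1000, epochs=...)
```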

keras lstm error: expected to see 1 array

So I want to make an LSTM network to run on my data, but I get this message: ValueError: Error when checking input: expected lstm_1_input to have shape (None, 1) but got array with shape (1, 557). This is my code: Answer: You need to change the input_shape value for the LSTM layer. Also, x_train must have the following shape. So, change to
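A hedged sketch of the kind of fix the answer points at: an LSTM expects 3-D input of shape (samples, timesteps, features), so the (1, 557) array has to be reshaped and input_shape set to match. Treating the 557 values as one sequence of 557 one-feature timesteps is an assumption about the data:

```python
import numpy as np
from tensorflow.keras import layers, models

x_train = np.random.rand(1, 557)                        # stand-in for the real data
x_train = x_train.reshape((x_train.shape[0], 557, 1))   # (samples, timesteps, features)

model = models.Sequential([
    layers.LSTM(32, input_shape=(557, 1)),  # matches the reshaped data
    layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
```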

Why is TensorFlow 2 much slower than TensorFlow 1?

It’s been cited by many users as the reason for switching to PyTorch, but I’ve yet to find a justification / explanation for sacrificing the most important practical quality, speed, for eager execution. Below is code benchmarking performance, TF1 vs. TF2, with TF1 running anywhere from 47% to 276% faster. My question is: what is it, at the graph or hardware level, that yields such a significant slowdown? I'm looking for a detailed answer; I am already familiar with the broad concepts. Relevant Git. Specs: CUDA 10.0.130, cuDNN 7.4.2, Python 3.7.4, Windows 10, GTX 1070. Benchmark results: UPDATE: Disabling Eager Execution
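One part of the update can be reproduced directly: TF2 runs eagerly by default, and graph execution can be restored either globally through the compat API or per-function with tf.function. A minimal sketch (the benchmarked model itself is not shown here):

```python
import tensorflow as tf

# Option 1: restore TF1-style graph execution globally. Must run before any
# other TF calls, and is not compatible with the eager GradientTape loop below.
# tf.compat.v1.disable_eager_execution()

# Option 2 (TF2-idiomatic): compile the hot path into a graph with tf.function.
@tf.function
def train_step(model, x, y, optimizer, loss_fn):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```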

How to load models trained on GPU into CPU (system) memory?

I trained a model on a GPU and now I am trying to evaluate it on the CPU (the GPU is being used for a different training run). However, when I try to load it using: I am getting a CUDA_ERROR_OUT_OF_MEMORY: (I also tried setting compile=True, with the same result.) It seems that the model is being loaded onto the GPU, which is already used by another instance. How can I force Keras/TensorFlow to load it into system memory and execute it on the CPU? Answer: Can you define everything inside with tf.device('/cpu:0'): except the library import part, and test? If that doesn’t work
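Two common ways to keep the model off the GPU, as a hedged sketch (the model path is a placeholder): hide the GPU from TensorFlow before it initialises, and/or pin the load and inference to the CPU device as the answer suggests.

```python
import os
# Must be set before TensorFlow initialises the GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf
from tensorflow.keras.models import load_model

with tf.device("/cpu:0"):
    model = load_model("my_model.h5", compile=False)  # path is a placeholder
    # preds = model.predict(x_test)  # inference also stays on the CPU
```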

Concatenate two models with tensorflow.keras

I’m currently studying neural network models for image analysis, with the MNIST dataset. I first used only the image to build a first model. Then I created an additional variable, which is: 0 when the digit is actually between 0 and 4, and 1 when it’s greater than or equal to 5. Therefore, I want to build a model that can take these two pieces of information: the image of the digit, and that additional variable I just created. I created the first two models, one for the image and one for the exogenous variable, as follows: Then I would
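A hedged sketch of how the two inputs could be merged with the functional API (layer sizes are illustrative; the MNIST image branch and the single 0/1 exogenous variable are taken from the question):

```python
from tensorflow.keras import layers, models

# Branch 1: the 28x28 MNIST image.
img_in = layers.Input(shape=(28, 28, 1), name="image")
x = layers.Conv2D(32, (3, 3), activation="relu")(img_in)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu")(x)

# Branch 2: the exogenous variable (0 if digit < 5, else 1).
var_in = layers.Input(shape=(1,), name="exogenous")
y = layers.Dense(4, activation="relu")(var_in)

# Merge both branches and classify the 10 digits.
merged = layers.concatenate([x, y])
out = layers.Dense(10, activation="softmax")(merged)

model = models.Model(inputs=[img_in, var_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```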

Creating a VAE model throws the exception “you should implement a `call` method.”

I want to create a VAE (variational autoencoder). During model creation it throws an exception: “When subclassing the Model class, you should implement a call method.” I am using TensorFlow 2.0. These are the models (with names) I want to get: Answer: The problem is here: you are passing three positional arguments to the constructor, where only two are expected (inputs and outputs); a Model does not take a name as a third positional argument like that. Three positional parameters break the detection of a functional network versus a subclassed model in the Keras source code. So just replace the code with:
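A minimal sketch of the fix, assuming a functional encoder built from input and output tensors (the layers and the name are illustrative): pass only the two expected positional arguments and give the name as a keyword if it is wanted.

```python
from tensorflow.keras import layers, models

# Illustrative encoder; the question's actual VAE layers are not shown.
inputs = layers.Input(shape=(28, 28, 1))
x = layers.Flatten()(inputs)
outputs = layers.Dense(2)(x)  # e.g. the latent mean

# Not:  models.Model(inputs, outputs, "encoder")   <- third positional argument
encoder = models.Model(inputs=inputs, outputs=outputs, name="encoder")
```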

Model not training and negative loss when whitening input data

I am doing segmentation and my dataset is rather small (1840 images), so I would like to use data augmentation. I am using the generator provided in the Keras documentation, which yields a tuple with a batch of images and the corresponding masks, augmented the same way. I am then training my model with this generator: But by using this I get a negative loss and the model is not training: I also want to add that the model does train if I don’t use featurewise_center and featurewise_std_normalization. But I am using a model with batch normalization that performs way
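A plausible explanation (an assumption, since the full code is not shown) is that featurewise_center / featurewise_std_normalization get applied to the masks as well, which pushes the 0/1 mask values outside [0, 1] and makes binary cross-entropy go negative. A sketch of keeping the whitening on the image generator only, while the two generators share the same seed so images and masks stay aligned:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Shared geometric augmentations; whitening only on the images.
image_datagen = ImageDataGenerator(featurewise_center=True,
                                   featurewise_std_normalization=True,
                                   rotation_range=10,
                                   horizontal_flip=True)
mask_datagen = ImageDataGenerator(rotation_range=10,
                                  horizontal_flip=True)

# Featurewise statistics must be computed on the images before training.
# image_datagen.fit(x_train)

seed = 1  # same seed keeps images and masks augmented identically
# image_gen = image_datagen.flow(x_train, batch_size=32, seed=seed)
# mask_gen = mask_datagen.flow(y_train, batch_size=32, seed=seed)
# train_gen = zip(image_gen, mask_gen)
# model.fit(train_gen, steps_per_epoch=len(x_train) // 32, epochs=...)
```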