I am beginning with image classification using Keras. I tried the simple MNIST dataset for detecting digits in images and ran the model. However, I wanted to test the model on my own dataset and am facing a problem. Error: WARNING:tensorflow:Model was constructed with shape (None, 28, 28) for input KerasTensor(type_spec=TensorSpec(shape=(None, 28, 28), dtype=tf.float32, name='flatten_input'), name='flatten_input', description="created by layer 'flatten_input'"), but it was
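A minimal sketch of testing such a model on a custom image, assuming a trained MNIST classifier saved as "mnist_model" and an image file "my_digit.png" (both names hypothetical): the image has to be resized to 28×28 grayscale and given a batch dimension before model.predict.

```python
# Minimal sketch (assumptions: a trained MNIST model saved as "mnist_model",
# and a custom digit image at "my_digit.png" -- both names are hypothetical).
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model("mnist_model")

# Load the custom image, convert to grayscale, and resize to the 28x28
# shape the model was built with.
img = Image.open("my_digit.png").convert("L").resize((28, 28))
x = np.array(img, dtype="float32") / 255.0   # scale pixels to [0, 1]

# The model expects a batch dimension: (28, 28) -> (1, 28, 28).
x = np.expand_dims(x, axis=0)

probs = model.predict(x)
print("Predicted digit:", np.argmax(probs, axis=-1)[0])
```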
InvalidArgumentError training multivariate LSTM autoencoder
I tried to run experiments on different datasets using this model, and it works fine for univariate time series. However, I get an issue when trying it on multivariate time series, and I think it is due to the TimeDistributed layer, but I am not sure. I tried reading different posts about the same question with no luck. trainx
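For reference, a sketch of a multivariate LSTM autoencoder where the TimeDistributed(Dense(...)) head emits n_features outputs per timestep, which is a common source of shape errors with multivariate data; the shapes and data below are placeholders.

```python
# Sketch of an LSTM autoencoder for multivariate series (timesteps and
# n_features are placeholder values; the data array is random).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

timesteps, n_features = 30, 5               # hypothetical shape
trainx = np.random.rand(100, timesteps, n_features).astype("float32")

model = Sequential([
    LSTM(64, activation="relu", input_shape=(timesteps, n_features)),
    RepeatVector(timesteps),                 # repeat the encoding once per timestep
    LSTM(64, activation="relu", return_sequences=True),
    TimeDistributed(Dense(n_features)),      # must emit n_features, not 1
])
model.compile(optimizer="adam", loss="mse")
model.fit(trainx, trainx, epochs=2, batch_size=16, verbose=0)
```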
Keras – Hyper Tuning the initial state of the model
I've written an LSTM model that predicts sequential data. I've tuned some of the layers' parameters using AWS SageMaker. While validating the model I ran a model with a specific configuration several times. Most of the time the results are similar; however, one run was much better than the others, which led me to think that the initial state of
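One way to check whether that run-to-run variation comes from the random initial weights, as a sketch, is to pin all the random seeds so repeated runs start from the same initialization (the seed value is arbitrary):

```python
# Sketch: fix the random seeds so repeated runs start from the same
# initial weights (removes one source of run-to-run variation).
import os, random
import numpy as np
import tensorflow as tf

seed = 42                               # arbitrary value
os.environ["PYTHONHASHSEED"] = str(seed)
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
```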
How does TensorFlow initialize weight values in my neural network?
I found that by default Keras uses Glorot/Xavier to initialize weights, which means that the values will be between ±sqrt(6 / float(F_in + F_out)). But in my case I use the architecture below, with ishape = (None, 4): I don't use a fixed input size. (My input data is a DNA sequence in one-hot encoding.) How does Keras initialize
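As an illustration, the Glorot/Xavier limits depend on each layer's fan_in and fan_out, not on the variable sequence length; the sketch below (layer size hypothetical) writes out the default initializer explicitly for an input of shape (None, 4):

```python
# Sketch: Glorot/Xavier limits come from each layer's fan_in/fan_out,
# not from the (variable) sequence length.
import tensorflow as tf
from tensorflow.keras import layers, initializers

model = tf.keras.Sequential([
    layers.Input(shape=(None, 4)),        # variable-length one-hot DNA sequence
    # Glorot uniform is the default; written out here to make it explicit.
    # For this Dense layer fan_in = 4 (last input dim) and fan_out = 16,
    # so weights are drawn from +/- sqrt(6 / (4 + 16)).
    layers.Dense(16, kernel_initializer=initializers.GlorotUniform()),
])
```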
“Could not interpret activation function identifier: 256” error in Keras
I'm trying to run the following code but I get an error. Did I miss something in the code? Here is the error message: Answer This error indicates that you have defined an activation function that is not interpretable. In your definition of a dense layer you have passed two arguments, layers[i] and layers[i+1]. Based on the docs
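A sketch of the fix the answer points toward: Dense's second positional argument is activation, so passing layers[i+1] (e.g. 256) there fails; only the layer's own unit count should be positional (the layer sizes below are hypothetical).

```python
# Sketch of the fix: Dense's second positional argument is `activation`,
# so Dense(layers[i], layers[i+1]) passes an integer (e.g. 256) where an
# activation identifier is expected.
from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Dense

layer_sizes = [128, 256, 10]               # hypothetical architecture

model = Sequential([Input(shape=(784,))])
for units in layer_sizes[:-1]:
    model.add(Dense(units, activation="relu"))
model.add(Dense(layer_sizes[-1], activation="softmax"))
```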
How to make a custom activation function with trainable parameters in Tensorflow [closed]
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question
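For the question in the title, one common pattern (a sketch, not the asker's code) is a custom Layer that owns the trainable parameter via add_weight, for example a Swish-like activation with a trainable beta:

```python
# Sketch: a trainable activation implemented as a custom Layer that owns
# its parameter via add_weight (here a Swish-like beta, as an example).
import tensorflow as tf

class TrainableSwish(tf.keras.layers.Layer):
    def build(self, input_shape):
        # One trainable scalar shared across the layer's inputs.
        self.beta = self.add_weight(name="beta", shape=(),
                                    initializer="ones", trainable=True)

    def call(self, inputs):
        return inputs * tf.nn.sigmoid(self.beta * inputs)

# Usage: insert it like any other layer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(10,)),
    TrainableSwish(),
    tf.keras.layers.Dense(1),
])
```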
How can I see the model visualized?
I am trying out some sample GAN code; here is the generator. I want to see the visualized model, but this is not a Model. Is Model.summary() not a function of TensorFlow but of Keras? If so, how can I see the visualized model? My function is here. Answer One possible solution (or an idea) is to wrap
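A sketch along those lines: model.summary() and tf.keras.utils.plot_model() are both available under tf.keras, but they need the generator function to return a Model object (the generator below is a hypothetical stand-in):

```python
# Sketch: summary() and plot_model() work on a Keras Model object, so the
# generator function should build and return a Model.
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100):      # hypothetical GAN generator
    inputs = tf.keras.Input(shape=(latent_dim,))
    x = layers.Dense(128, activation="relu")(inputs)
    outputs = layers.Dense(784, activation="tanh")(x)
    return tf.keras.Model(inputs, outputs, name="generator")

gen = build_generator()
gen.summary()                                        # text view of the layers
tf.keras.utils.plot_model(gen, to_file="generator.png",
                          show_shapes=True)          # requires pydot + graphviz
```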
Last layer in an RNN – Dense, LSTM, GRU…?
I know you can use different types of layers in an RNN architecture in Keras, depending on the type of problem you have. What I'm referring to are, for example, layers.SimpleRNN, layers.LSTM or layers.GRU. So let's say we have (with the functional API in Keras): where lstm_3 is the last layer. Does it make sense to have it as an
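For context, a minimal functional-API sketch (shapes hypothetical) where the recurrent layers extract the sequence features and a Dense layer serves as the output head:

```python
# Sketch (functional API): recurrent layers extract sequence features,
# and a Dense layer is the usual output head, sized for the task.
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(None, 8))             # hypothetical (timesteps, features)
x = layers.LSTM(64, return_sequences=True)(inputs)
x = layers.LSTM(32)(x)                               # last recurrent layer, no sequences
outputs = layers.Dense(1)(x)                         # e.g. single-value regression head
model = tf.keras.Model(inputs, outputs)
```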
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object
I am trying to train my model (image classification) using TensorFlow. I keep getting an error when I try to run the following cell: Error is: I tried changing from loss='categorical_crossentropy' to loss='binary_crossentropy' but the issue persists. I wish to train the model, but the epoch keeps getting stuck. Edit: the train generator function and where it is used
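A sketch of one common remedy, assuming the images live under a placeholder path "dataset/train": scan the directory and drop any file PIL cannot open before the generator touches it.

```python
# Sketch: the error usually means a corrupt or non-image file is in the
# training directory; this pass removes anything PIL cannot open.
# ("dataset/train" is a placeholder path.)
import os
from PIL import Image, UnidentifiedImageError

root = "dataset/train"
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with Image.open(path) as img:
                img.verify()                 # raises if the file is not a valid image
        except (UnidentifiedImageError, OSError):
            print("removing unreadable file:", path)
            os.remove(path)
```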
How do I run a video in a PyQt container?
I need to play a video from the computer in a QVideoWidget container in PyQt, and run object detection on it with TensorFlow (OpenCV, cv2). The problem is that when the button is pressed, the video shows only one frame and nothing else. What could be the problem? Made in PyCharm, Python 3.7. Answer The whole problem is that you
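A sketch of the usual fix (using a QLabel instead of QVideoWidget so cv2 frames can be drawn directly; "video.mp4" is a placeholder path): read one frame per QTimer tick rather than a single frame inside the button handler.

```python
# Sketch (PyQt5 + OpenCV): reading one frame in the button handler shows a
# single image; a QTimer that grabs a frame on every tick keeps the video
# playing. "video.mp4" is a placeholder path.
import sys
import cv2
from PyQt5.QtCore import QTimer
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)
label = QLabel()
label.show()

cap = cv2.VideoCapture("video.mp4")

def next_frame():
    ok, frame = cap.read()
    if not ok:                      # end of the video
        timer.stop()
        return
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    h, w, ch = rgb.shape
    image = QImage(rgb.data, w, h, ch * w, QImage.Format_RGB888)
    label.setPixmap(QPixmap.fromImage(image))

timer = QTimer()
timer.timeout.connect(next_frame)
timer.start(30)                     # roughly 33 fps

sys.exit(app.exec_())
```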