Currently I am training a Word2Vec + LSTM model for Twitter sentiment analysis. I use the pre-trained GoogleNewsVectorNegative300 word embedding. The reason I use the pre-trained GoogleNewsVectorNegative300 is that the performance was much worse when I trained my own Word2Vec on my own dataset. The problem is that my training process has validation accuracy and loss stuck at 0.88 and 0.34 respectively. Then, my confusion
Tag: keras
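For reference, a minimal sketch of how pre-trained GoogleNews vectors are commonly wired into a frozen Keras Embedding layer feeding an LSTM sentiment classifier; the file path, vocabulary size, and tokenizer word_index are assumptions, not the asker's code.

```python
import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

EMB_DIM, MAX_WORDS = 300, 20000   # hypothetical vocabulary size

# Load the pre-trained GoogleNews vectors (file path is an assumption).
w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def build_embedding_matrix(word_index):
    """word_index would come from a fitted Keras Tokenizer; assumed here."""
    matrix = np.zeros((MAX_WORDS, EMB_DIM))
    for word, i in word_index.items():
        if i < MAX_WORDS and word in w2v:
            matrix[i] = w2v[word]            # copy the pre-trained vector
    return matrix

def build_model(embedding_matrix):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(
            MAX_WORDS, EMB_DIM,
            embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
            trainable=False),                # keep the GoogleNews vectors frozen
        tf.keras.layers.LSTM(128, dropout=0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # binary sentiment
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```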
Keras Confusion Matrix does not look right
I am running a Keras model on the Breast Cancer dataset. I got around 96% accuracy with it, but the confusion matrix is completely off. Here are the graphs: And here is my confusion matrix: The matrix says that I have no true negatives and that they are actually false negatives, when I believe it is the reverse. Another thing that
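For reference, a self-contained sketch (not the asker's code) of how a confusion matrix for a Keras binary classifier on the Breast Cancer data is usually built, with the label order pinned explicitly; the tiny network is a placeholder.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = Sequential([Dense(16, activation="relu", input_shape=(X.shape[1],)),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=20, verbose=0)

probs = model.predict(X_te).ravel()   # sigmoid probabilities in [0, 1]
y_pred = (probs >= 0.5).astype(int)   # threshold to hard class labels

# labels=[0, 1] pins the layout: row 0 = actual negatives [TN, FP],
# row 1 = actual positives [FN, TP], which is the usual source of
# seemingly "swapped" cells.
print(confusion_matrix(y_te, y_pred, labels=[0, 1]))
```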
How to draw the precision-recall curve for a segmentation model?
I am using a U-Net for segmenting my data of interest. The masks are grayscale and of size (256,256,1). There are 80 images in the test set. The test images (X_ts) and their respective ground-truth masks (Y_ts) are constructed, saved, and loaded like this: The shape of Y_ts (ground truth) is therefore (80,256,256,1) and these are of type “Array of
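For reference, a hedged sketch of one common way to draw a pixel-wise precision-recall curve for a binary segmentation model: flatten the ground-truth masks and the predicted probabilities and pass them to scikit-learn. The arrays below are random stand-ins for Y_ts and model.predict(X_ts).

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, auc

# Stand-ins so the sketch runs on its own; the real arrays would be
# Y_ts (ground truth) and model.predict(X_ts), both of shape (80, 256, 256, 1).
Y_ts = (np.random.rand(8, 64, 64, 1) > 0.5).astype(float)
Y_prob = np.random.rand(8, 64, 64, 1)

# Flatten masks and predicted probabilities into 1-D arrays of pixels.
y_true = (Y_ts.ravel() > 0.5).astype(int)   # binarise the grayscale masks
y_score = Y_prob.ravel()

precision, recall, _ = precision_recall_curve(y_true, y_score)
plt.plot(recall, precision, label=f"AUC = {auc(recall, precision):.3f}")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
plt.show()
```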
How can I sum up multiple inputs into one when using a submodel?
I wrote a custom Tree-RNN-CELL that can handle several different inputs when they are provided as a tuple. This is working fine, but now I want to put it together in a submodel, so that I can sum the 4 lines up in 2 lines and have a better overview (the tree gets big, so it is worth it).
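For reference, a minimal sketch of the general pattern being asked about: wrap the "combine several inputs" step in a small functional sub-model and reuse it, so the outer model needs fewer lines. The custom Tree-RNN cell itself is not reproduced; a plain element-wise Add stands in for it, and the feature size is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def make_merge_submodel(n_inputs, dim):
    # One Input per child branch; the sub-model just sums them element-wise.
    inputs = [layers.Input(shape=(dim,)) for _ in range(n_inputs)]
    summed = layers.Add()(inputs)
    return Model(inputs, summed, name=f"sum_{n_inputs}")

dim = 32                                   # hypothetical feature size
merge4 = make_merge_submodel(4, dim)

# In the outer model, the four branch outputs become a single call:
a, b, c, d = (layers.Input(shape=(dim,)) for _ in range(4))
merged = merge4([a, b, c, d])              # replaces four separate add lines
out = layers.Dense(1)(merged)
model = Model([a, b, c, d], out)
model.summary()
```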
How to add a traditional classifier (SVM) to my CNN model
Here is my model. I want to make an SVM classifier the final classifier in this model, so how can I do that? Also, another question: I want to know the predicted class of a certain input, but when I use the model it only gives me probabilities, so how can I solve that too? Answer You can use the neural network as a feature
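For reference, a hedged sketch of the approach the answer points at: reuse the CNN up to its penultimate layer as a feature extractor, fit a scikit-learn SVM on those features, and use argmax to turn the softmax probabilities into a class index. The model and data below are stand-ins, not the asker's.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Stand-ins so the sketch runs on its own; replace with the real model/data.
X_train = np.random.rand(64, 32, 32, 3).astype("float32")
y_train = np.random.randint(0, 3, 64)

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(X_train, y_train, epochs=1, verbose=0)

# Reuse the CNN up to its penultimate layer as a feature extractor.
feature_extractor = tf.keras.Model(cnn.input, cnn.layers[-2].output)
train_feats = feature_extractor.predict(X_train, verbose=0)

# Fit a classical SVM on those features; it predicts hard class labels.
svm = SVC(kernel="rbf")
svm.fit(train_feats, y_train)
print(svm.predict(train_feats[:5]))

# For the second question: turn the CNN's probabilities into a class index.
probs = cnn.predict(X_train[:1], verbose=0)
print(np.argmax(probs, axis=1))
```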
Extracting first-layer weights from a multi-layer Keras NN and transferring them to a single layer NN
I trained a 3-hidden-layer NN (3-HL) using Keras (with good results), and I wanted to extract the weights from its first layer (inputs to its first hidden layer) and use them in a single-hidden-layer NN (inputs to its single hidden layer) for training. The 3-HL model summary, along with its extracted (hopefully first-layer) weight dimensions, is as follows:
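For reference, a minimal sketch of the transfer itself with get_weights/set_weights; the layer sizes here are assumptions, and the only requirement is that the receiving layer has the same kernel and bias shapes as the donor layer.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

n_in, n_h1 = 20, 64                                  # hypothetical dimensions

three_hl = Sequential([
    Dense(n_h1, activation="relu", input_shape=(n_in,)),
    Dense(32, activation="relu"),
    Dense(16, activation="relu"),
    Dense(1, activation="sigmoid"),
])

# Kernel shape (n_in, n_h1) and bias shape (n_h1,) from the first layer.
kernel, bias = three_hl.layers[0].get_weights()

single_hl = Sequential([
    Dense(n_h1, activation="relu", input_shape=(n_in,)),   # same shapes required
    Dense(1, activation="sigmoid"),
])
single_hl.layers[0].set_weights([kernel, bias])

# Optionally freeze the transferred layer while the new head trains.
single_hl.layers[0].trainable = False
single_hl.compile(optimizer="adam", loss="binary_crossentropy")
```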
Tensorflow: Incompatible shapes: [1,2] vs. [1,4,4,2048]
I have the following tensorflow model: I have simplified this somewhat in an attempt to narrow down the problem. When I run this I get the following error: This error always seems to occur on a different input image. All my images have exactly the same dimensions. I am using tensorflow 2.4.1. What am I missing? Answer The ResNet50 model
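For reference, a hedged sketch of the usual fix the answer points at: with include_top=False, ResNet50 outputs a 4-D feature map such as (None, 4, 4, 2048) for 128x128 inputs, so a pooling or flatten step is needed before the final Dense layer so that predictions match labels of shape (batch, 2). The input size and class count are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(128, 128, 3))

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),      # (None, 4, 4, 2048) -> (None, 2048)
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```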
Incompatibility between input and final Dense Layer (Value Error)
I’m following this tutorial from Nabeel Ahmed to create your own emotion detector using Keras (I’m a noob) and I’ve found a strange behaviour that I’d like to understand. The input data is a bunch of 48×48 images, each one labelled with an integer value between 0 and 6 (each number stands for an emotion label), which represents the emotion present
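For reference, a minimal sketch of how the label encoding and the final Dense layer have to agree for 7 emotion classes: either keep integer labels with sparse_categorical_crossentropy, or one-hot them to shape (batch, 7) with categorical_crossentropy. The tiny network and random data are placeholders, not the tutorial's model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

X = np.random.rand(32, 48, 48, 1).astype("float32")   # stand-in images
y = np.random.randint(0, 7, 32)                        # integer labels 0..6

model = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(48, 48, 1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(7, activation="softmax"),   # one unit per emotion class
])

# Option 1: integer labels with the sparse loss.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)

# Option 2: one-hot labels of shape (batch, 7) with the plain categorical loss.
y_onehot = tf.keras.utils.to_categorical(y, num_classes=7)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y_onehot, epochs=1, verbose=0)
```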
Tensorflow MirroredStrategy halves the 2nd dimension, though the shape in the object remains right
I’ve recently tried to use MirroredStrategy for training. The relevant code is: The dataset print is: which has the correct dimensions, but I get the following error: which is odd, as the documentation says that the strategy will halve the first dimension, not the second; it should split the dataset in 2 along the first axis. Does anyone know what
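For reference, a hedged sketch of the standard MirroredStrategy setup: create the model inside strategy.scope() and batch the tf.data.Dataset with a global batch size, which the strategy then splits across replicas along the first (batch) axis. The shapes below are placeholders, not the asker's.

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
per_replica_bs = 16
global_bs = per_replica_bs * strategy.num_replicas_in_sync

X = np.random.rand(256, 10).astype("float32")
y = np.random.randint(0, 2, 256)

# Batch with the global size; each replica then sees global_bs / n_replicas
# examples, split along the first (batch) axis.
ds = tf.data.Dataset.from_tensor_slices((X, y)).batch(global_bs)

with strategy.scope():                    # variables created under the scope
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

model.fit(ds, epochs=1)
```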
Fitting LSTM model
I am trying to fit an LSTM model, but it gave me an error with the shape. My dataset has 218 rows and 16 features, including the target one. I split the data, 80% for training and 20% for testing. After compiling the model and running it, I got this error: Variable definitions: batch_size = 160 epochs = 20 timesteps =
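For reference, a minimal sketch of the input shape a Keras LSTM expects: a 3-D tensor of (samples, timesteps, features), so tabular data of shape (218, 15) (15 input features after removing the target) has to be reshaped, for example to one timestep per row, before fitting. The data below is synthetic and the layer sizes are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_samples, n_features, timesteps = 218, 15, 1
X = np.random.rand(n_samples, n_features).astype("float32")
y = np.random.randint(0, 2, n_samples)

# (218, 15) -> (218, 1, 15): samples, timesteps, features.
X_seq = X.reshape(n_samples, timesteps, n_features)

model = tf.keras.Sequential([
    layers.LSTM(32, input_shape=(timesteps, n_features)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(X_seq, y, batch_size=32, epochs=20, validation_split=0.2, verbose=0)
```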