
Tag: keras

Word2Vec + LSTM Good Training and Validation but Poor on Test

Currently I am training my Word2Vec + LSTM for Twitter sentiment analysis. I use the pre-trained GoogleNewsVectorNegative300 word embedding because performance was much worse when I trained my own Word2Vec on my own dataset. The problem is that my training process has validation accuracy and loss stuck at 0.88 and 0.34 respectively. Then, my confusion
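A minimal sketch of the kind of setup described here (not the poster's actual code): loading the GoogleNews vectors with gensim and freezing them in a Keras Embedding + LSTM. The file name, vocabulary size, and layer sizes are placeholder assumptions.

```python
import numpy as np
from gensim.models import KeyedVectors
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.initializers import Constant

# Pre-trained GoogleNews vectors (300 dimensions)
w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

texts = ["great movie", "terrible service"]        # placeholder tweets
vocab_size, embed_dim = 20000, 300                 # assumed values
tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(texts)

# Copy known word vectors into an embedding matrix; unknown words stay zero
embedding_matrix = np.zeros((vocab_size, embed_dim))
for word, idx in tokenizer.word_index.items():
    if idx < vocab_size and word in w2v:
        embedding_matrix[idx] = w2v[word]

model = Sequential([
    Embedding(vocab_size, embed_dim,
              embeddings_initializer=Constant(embedding_matrix),
              trainable=False),                    # keep pre-trained vectors frozen
    LSTM(128),
    Dense(1, activation="sigmoid"),                # binary sentiment output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```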

How do I read two folders in a directory and combine them under one label using flow_from_directory?

Tensorflow/Keras: I want to classify images as either “Circle”, “Square”, or “Triangle”. I have a directory containing 6 folders, with each shape having a separate “shaded” or “unshaded” folder. How can I combine them into one category? For example, shaded and unshaded circles would both be given the label “0” using flow_from_directory. I will then feed this into my CNN model.
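One hedged sketch of a way to do this (not necessarily the accepted answer): since flow_from_directory assigns one class per sub-folder, you can instead list the files yourself, collapse the shaded/unshaded suffix into a single shape label, and use flow_from_dataframe. The folder layout and file extension below are assumptions.

```python
import glob
import os
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Assumed layout: shapes/circle_shaded, shapes/circle_unshaded, shapes/square_shaded, ...
records = []
for folder in os.listdir("shapes"):
    shape = folder.split("_")[0]                   # drop the shaded/unshaded suffix
    for path in glob.glob(os.path.join("shapes", folder, "*.png")):
        records.append({"filename": path, "class": shape})
df = pd.DataFrame(records)

gen = ImageDataGenerator(rescale=1.0 / 255)
# Shaded and unshaded circles now share the single class "circle"
train_flow = gen.flow_from_dataframe(df, x_col="filename", y_col="class",
                                     target_size=(64, 64), class_mode="categorical")
```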

How to import utils from keras_unet

I’m trying to import utils from keras_unet in Google Colab, but I have a problem. The log shows “keras-unet init: TF version is >= 2.0.0 – using tf.keras instead of Keras”, followed by a ModuleNotFoundError. Answer: You must install keras-unet before importing it, as follows. Let us know if the issue still persists. Thanks!
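The answer boils down to installing the package before importing it; a minimal sketch of what that looks like in a Colab cell:

```python
# Install the package first (Colab shell command)
!pip install keras-unet

# The import should no longer raise ModuleNotFoundError
from keras_unet import utils
```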

How does Tokenizer in tensorflow deal with out of vocabulary tokens if I don’t provide oov_token?

I didn’t get any error with that code even though I didn’t provide the oov_token argument. I expected to get an error at test_tweets = tokenizer.texts_to_sequences(X_test). How does tensorflow deal with out-of-vocabulary words at test time when you don’t provide the oov_token? Answer: OOV words will be ignored/discarded by default if oov_token is None:
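A small sketch of the behaviour described in the answer: without oov_token, unseen words are silently dropped; with it, they map to the OOV index. The toy sentences are placeholders.

```python
from tensorflow.keras.preprocessing.text import Tokenizer

train = ["the cat sat"]
test = ["the dog sat"]                        # "dog" was never seen during fitting

tok = Tokenizer()                             # no oov_token supplied
tok.fit_on_texts(train)
print(tok.texts_to_sequences(test))           # [[1, 3]] -> "dog" is discarded, no error

tok_oov = Tokenizer(oov_token="<OOV>")
tok_oov.fit_on_texts(train)
print(tok_oov.texts_to_sequences(test))       # [[2, 1, 4]] -> "dog" becomes the OOV index
```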

Compile model which has different dimensions of output and labels (in Tensorflow)

Simplest example which replicates the error: I understand that in this case the output of the model is (batch_size, 10) while my labels have (batch_size,) dimensions. This is why I use tf.nn.sparse_softmax_cross_entropy_with_logits. Before I can provide any labels to this model, compilation fails with the following error. After some investigation, I see that compilation fails because tensorflow somehow thinks that
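For context, a hedged sketch of the usual way this shape mismatch is handled (not necessarily the fix given in the answer): keep integer labels of shape (batch_size,) and compile with a sparse loss so Keras does not expect one-hot targets matching the (batch_size, 10) output. Layer sizes and data are placeholders.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(10),                      # raw logits, no softmax
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])

x = np.random.rand(32, 20).astype("float32")
y = np.random.randint(0, 10, size=(32,))            # integer class ids, shape (batch_size,)
model.fit(x, y, epochs=1, verbose=0)
```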

Behavior of steps_per_epoch and validation_steps in Keras Model

I’m a little bit confused about the behavior of steps_per_epoch and validation_steps in the fit function. In particular, if I set steps_per_epoch to be smaller than total_records/batch_size, would it be that a) the model only trains on the same subset of training data for every epoch, or b) the model will use different training data for each epoch and will
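One way to see which behaviour you get is to pass an infinite Python generator and watch which batches each epoch consumes; the commonly cited behaviour is that the generator is not reset between epochs, so successive epochs pull different batches (option b). The tiny model and numbers below are placeholder assumptions, not the post's code.

```python
import numpy as np
import tensorflow as tf

def batch_gen(x, y, batch_size):
    i = 0
    while True:                                   # never reset between epochs
        start = (i * batch_size) % len(x)
        yield x[start:start + batch_size], y[start:start + batch_size]
        i += 1

x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(20,))])
model.compile(optimizer="adam", loss="binary_crossentropy")

# 1000 records / 32 per batch ≈ 31 full batches, but only 5 steps per epoch:
# epoch 2 continues with batches 5-9 instead of restarting at batch 0.
model.fit(batch_gen(x, y, 32), steps_per_epoch=5, epochs=3, verbose=0)
```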
