I keep getting "F tensorflow/core/platform/default/env.cc:73] Check failed: ret == 0 (11 vs. 0) Thread tf_data_private_threadpool creation via pthread_create() failed." errors during training, although the machine is quite powerful: 64 logical cores altogether, ulimit -s gives 32768, and ulimit -u gives 1030608. I want to train the following network with a bunch of online-generated 512*512 grayscale images along with two additional parameters
Tag: keras
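The excerpt above does not include the eventual fix, but since pthread_create() returning EAGAIN (11) usually means the process hit an OS thread limit, a minimal hedged sketch is to cap how many threads TensorFlow and tf.data spawn. The thread counts here are illustrative assumptions, not values from the original post.

```python
import tensorflow as tf

# Hedged workaround for "Thread tf_data_private_threadpool creation via
# pthread_create() failed": cap the threads TensorFlow spawns so the
# process stays under the OS thread limit. These two calls must run
# before any TensorFlow op executes.
tf.config.threading.set_intra_op_parallelism_threads(16)
tf.config.threading.set_inter_op_parallelism_threads(8)

# tf.data keeps its own private threadpool; it can be capped per dataset:
options = tf.data.Options()
options.threading.private_threadpool_size = 8
ds = tf.data.Dataset.range(10).with_options(options)

print(tf.config.threading.get_intra_op_parallelism_threads())  # 16
```

Lower thread counts trade some input-pipeline throughput for staying under the per-process thread ceiling reported by ulimit -u.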
Keras: simple neural network error (model.predict)
Do any of you know why I get the following error? My code: You can ignore the integrator part; I just want to know why model.predict won't work. Here is the error: Answer The problem is with these lines: Here your model is set up to receive a rank-2 tensor as input, but you are only giving
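The truncated answer points at a rank mismatch: a model built for rank-2 input expects a batch dimension. A minimal sketch (the model and values here are illustrative, not the asker's code) shows how adding that dimension with np.expand_dims makes predict work:

```python
import numpy as np
import tensorflow as tf

# Hypothetical model expecting rank-2 input: Input(shape=(3,)) means each
# call must receive shape (batch_size, 3), never a bare vector of shape (3,).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(1),
])

x = np.array([1.0, 2.0, 3.0], dtype=np.float32)  # shape (3,)  -- rank 1
batched = np.expand_dims(x, axis=0)              # shape (1, 3) -- rank 2
pred = model.predict(batched)
print(pred.shape)  # (1, 1)
```

Passing x directly instead of batched is the kind of rank-1 input the answer describes as the cause of the error.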
The Mask layer is not working with MLPs; how do I add a custom layer with masking?
I’m using MLPs to forecast a time series, and I implemented code that contains a mask layer to let the model skip the masked values. For instance, my time series has a lot of NaN values, which I fill with the value -999. I don’t want to remove them, but I want the Keras masking to
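The resolution is cut off above, so the following is an assumption, not the original answer: Dense layers do not consume Keras masks, and a common MLP workaround is to zero out the fill values explicitly with a computed mask. The mask value -999 comes from the question; everything else is illustrative.

```python
import numpy as np
import tensorflow as tf

# Hedged sketch: Dense layers ignore the mask produced by keras.layers.Masking,
# so build the mask by hand and multiply it in, turning every -999 slot into 0.
MASK_VALUE = -999.0

inputs = tf.keras.Input(shape=(8,))
mask = tf.keras.layers.Lambda(
    lambda x: tf.cast(tf.not_equal(x, MASK_VALUE), tf.float32))(inputs)
masked = tf.keras.layers.Multiply()([inputs, mask])  # -999 entries become 0
outputs = tf.keras.layers.Dense(1)(masked)
model = tf.keras.Model(inputs, outputs)

x = np.full((1, 8), MASK_VALUE, dtype=np.float32)
x[0, :4] = 1.0                      # first four timesteps are real values
pred = model.predict(x)
print(pred.shape)  # (1, 1)
```

Zeroing rather than skipping changes the semantics slightly (the network still sees a value, just a neutral one), which is the usual trade-off when masking is not natively supported.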
layer.get_weights() values are not equal across the same model’s layers
Why are the layer weights not all equal? Here is the output: The a_weights == b_weights comparisons are not all True. What’s the problem? Answer Notice that the only time a_weights == b_weights is True is when you are referencing a layer which does not have any weights. np.array_equal is returning False because you are actually comparing lists of arrays and
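Following the answer's point that get_weights() returns a list of arrays (kernel, bias, ...) rather than a single array, a small sketch (with an illustrative model, not the asker's) compares the lists element by element instead of as whole objects:

```python
import numpy as np
import tensorflow as tf

# get_weights() returns a *list* of NumPy arrays, e.g. [kernel, bias].
# Comparing two such lists directly compares mismatched-shape collections,
# so compare each array in the list individually instead.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4),
])

a_weights = model.layers[0].get_weights()  # [kernel (2, 4), bias (4,)]
b_weights = model.layers[0].get_weights()

same = all(np.array_equal(a, b) for a, b in zip(a_weights, b_weights))
print(same)  # True
```

The per-array loop is the reliable way to check equality; comparing the lists wholesale is exactly the pitfall the truncated answer describes.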
Tensorflow Fused conv implementation does not support grouped convolutions
I trained a neural network on color images (3 channels). It worked, but now I want to try it in grayscale to see if I can improve accuracy. Here is the code: You can see that I have changed the input_shape to a single channel for grayscale. I’m getting an error: Node: ‘sequential_26/conv2d_68/Relu’ Fused conv
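The "Fused conv implementation does not support grouped convolutions" message typically appears when the data still has 3 channels while input_shape declares 1, so Conv2D interprets the mismatch as a grouped convolution. A hedged sketch (random data and layer sizes are illustrative) converts the images before feeding them in:

```python
import numpy as np
import tensorflow as tf

# If the pipeline still yields RGB batches, convert them to one channel
# so they match the declared input_shape=(..., 1).
rgb_batch = np.random.rand(2, 32, 32, 3).astype(np.float32)
gray_batch = tf.image.rgb_to_grayscale(rgb_batch)   # shape (2, 32, 32, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
])
out = model(gray_batch)
print(out.shape)  # (2, 30, 30, 8)
```

If the images come from image_dataset_from_directory, passing color_mode="grayscale" there achieves the same thing at load time.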
Prediction with keras embedding leads to indices not in list
I have a model that I trained with For the embedding I use GloVe as a pre-trained embedding dictionary. I first build the tokenizer and text sequences with: t = Tokenizer() t.fit_on_texts(all_text) and then I calculate the embedding matrix with: Now I’m using a new dataset for the prediction. This leads to an error: Node: ‘model/synopsis_embedd/embedding_lookup’ indices[38666,63] = 136482
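The error means the new dataset produced a token index (136482) outside the Embedding layer's vocabulary. A hedged sketch of the underlying fix, shown here with TextVectorization (a deliberate swap for the post's Tokenizer; the sentences are illustrative): fit the vocabulary once on the training text, reuse it for new text, and let unseen words fall into the reserved OOV slot so no index can exceed input_dim.

```python
import tensorflow as tf

# Adapt the vocabulary on *training* text only; unseen words at prediction
# time map to the reserved OOV index 1 instead of an out-of-range index.
vec = tf.keras.layers.TextVectorization()
vec.adapt(["the cat sat", "the dog ran"])

ids = vec(["the zebra ran"])          # "zebra" was never seen -> index 1
vocab_size = vec.vocabulary_size()    # safe input_dim for the Embedding layer
print(int(tf.reduce_max(ids)), vocab_size)
```

With the post's Tokenizer API the analogous precautions are constructing it with an oov_token and sizing the Embedding layer as len(t.word_index) + 1, while reusing the fitted tokenizer (never refitting on the new dataset).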
ImportError: dlopen(…): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
I am a beginner at machine learning. I am trying to use the LSTM algorithm, but when I write from keras.models import Sequential it shows the error below: How can I fix this? Thank you so much! Full error message: Answer Problem solved: install tensorflow again with and change the import to
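The answer's exact commands are truncated above, so the following is an assumption rather than the verbatim fix: the commonly reported resolution for this dlopen error is to reinstall TensorFlow and import Keras through the tensorflow namespace instead of the standalone keras package.

```python
# Hedged sketch of the usual fix (the original answer's specifics are cut off):
#
#   pip install --upgrade --force-reinstall tensorflow
#
# and then import Sequential via tensorflow.keras rather than bare keras:
from tensorflow.keras.models import Sequential

model = Sequential()
print(type(model).__name__)
```

Importing through tensorflow.keras guarantees the Keras API is backed by the TensorFlow build that was just installed, avoiding the mismatched-shared-library load that produces the @rpath error.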
Transfer Learning with Quantization Aware Training using Functional API
I have a model that uses transfer learning from MobileNetV2, and I’d like to quantize it and compare its accuracy against a non-quantized transfer-learning model. However, the toolkit does not entirely support recursive quantization, but according to this comment, the following method should quantize my model: https://github.com/tensorflow/model-optimization/issues/377#issuecomment-820948555 What I tried doing was: It is still giving me the
How to create tensorflow dataset from runtime generated images?
So, I started a small project based on TensorFlow and could not understand how to prepare a dataset generated from in-memory input. I have a random number of sources that generate images, which I then pass to a Python script. The images are created as byte arrays in PNG format. I collect the images into an array and want to prepare a dataset from it and train
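A minimal sketch of one way to do this (the synthetic PNG bytes and pipeline parameters are illustrative assumptions): wrap the list of in-memory PNG byte strings in a tf.data.Dataset and decode each image on the fly.

```python
import tensorflow as tf

# Stand-ins for the runtime-generated images: PNG byte strings in memory.
imgs = [tf.io.encode_png(tf.zeros((512, 512, 1), dtype=tf.uint8)).numpy()
        for _ in range(4)]

def decode(png_bytes):
    img = tf.io.decode_png(png_bytes, channels=1)   # (512, 512, 1) uint8
    return tf.cast(img, tf.float32) / 255.0         # normalize for training

ds = (tf.data.Dataset.from_tensor_slices(imgs)      # one element per PNG
        .map(decode, num_parallel_calls=tf.data.AUTOTUNE)
        .batch(2))

for batch in ds.take(1):
    print(batch.shape)  # (2, 512, 512, 1)
```

Decoding inside map keeps only the compressed bytes resident and parallelizes the PNG decoding, which matters when the sources keep producing new images during training.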
Element-wise multiplication of two lists that are tf.Tensor in TensorFlow
What is the fastest way to do an element-wise multiplication between a tensor and an array in TensorFlow 2? For example, if the tensor T (of type tf.Tensor) is: and we have an array a (of type np.array): I want to have: as output. Answer This is called the outer product of two tensors. It’s easy to compute by taking
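Since the answer identifies the desired result as an outer product, a short sketch (the values of T and a are illustrative, as the originals are truncated) computes it with einsum, which makes the index pattern explicit:

```python
import numpy as np
import tensorflow as tf

# Outer product of a rank-1 tensor and a NumPy array: result[i, j] = T[i] * a[j].
T = tf.constant([1.0, 2.0, 3.0])
a = np.array([10.0, 20.0], dtype=np.float32)

outer = tf.einsum('i,j->ij', T, a)
# equivalently: tf.tensordot(T, a, axes=0)
print(outer.numpy())  # [[10. 20.] [20. 40.] [30. 60.]]
```

tf.tensordot with axes=0 produces the same (3, 2) result; einsum is often preferred when the expression later generalizes to batched or higher-rank variants.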