I made a simple dataset like the one below, and I sliced it using from_tensor_slices (I don’t know the exact role of the tensor-slices function…). When I print the dataset using the print function, it shows the output below, and when I print it using a for loop, it shows the output below. Here is my question: in my understanding, the tensor shapes should be (4,2) and (4,1) because row
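A minimal sketch of what from_tensor_slices does, with hypothetical data matching the shapes in the question: it slices its arguments along the first axis, so each dataset element is one row, with the leading dimension removed.

```python
import numpy as np
import tensorflow as tf

# Hypothetical data matching the shapes in the question.
features = np.arange(8, dtype=np.float32).reshape(4, 2)  # shape (4, 2)
labels = np.arange(4, dtype=np.float32).reshape(4, 1)    # shape (4, 1)

# from_tensor_slices splits the tensors along the FIRST axis,
# so each of the 4 dataset elements has shapes (2,) and (1,).
ds = tf.data.Dataset.from_tensor_slices((features, labels))
print(ds.element_spec)

for x, y in ds:
    print(x.shape, y.shape)  # (2,) (1,)
```

This is why printing the dataset shows element shapes (2,) and (1,) rather than (4,2) and (4,1): the leading dimension of 4 becomes the number of elements.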
Tag: tensorflow
Trying to change the input channels for a Keras pretrained model
I have an Xception model, and I combined it with other layers to change the number of input channels to 3; however, I got an error. Answer You simply have to embed Xception in the correct way in your new model: we create a new Input layer, then apply upsampling, and in the end we pass everything to Xception. Here is the running
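A sketch of that embedding, under assumed input dimensions (32×32, 1 channel — not stated in the excerpt): replicate the channel to get 3, upsample so the spatial size meets Xception’s 71×71 minimum, then pass the result to the backbone.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed original input: 32x32 grayscale (hypothetical; adapt to your data).
inp = layers.Input(shape=(32, 32, 1))
x = layers.Concatenate()([inp, inp, inp])  # replicate 1 channel -> 3
x = layers.UpSampling2D(size=(3, 3))(x)    # 32x32 -> 96x96 (>= 71x71 minimum)

# Embed Xception as a layer of the new model (weights=None to avoid a download).
backbone = tf.keras.applications.Xception(
    weights=None, include_top=False, input_shape=(96, 96, 3))
out = backbone(x)
model = models.Model(inp, out)
```

With a total downsampling factor of 32 in Xception, the 96×96 input yields a (3, 3, 2048) feature map, onto which you can stack your own head.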
Keras model fits on data with the wrong shape
I’ve created the following model: and the following dummy data: with the shapes of (4, None, 2) and (4, 3). Looking at the model structure, one can see that the model has 3 outputs of shape (None, 1). I was wondering how come the fit works, when I expected them to be of shape (4, 3, 1) and not (4,
tf.keras.BatchNormalization giving unexpected output
The output of the above code (in TensorFlow 1.15) is: My problem is why the same function gives completely different outputs. I also played with some of the function’s parameters, but the result was the same. For me, the second output is what I want. PyTorch’s batchnorm also gives the same output as the second one. So
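The two different outputs typically correspond to the layer’s two modes. A NumPy sketch of what BatchNormalization computes, assuming the default gamma=1, beta=0 and Keras’ default epsilon of 1e-3 (a simplification, not the full layer):

```python
import numpy as np

def batchnorm(x, moving_mean, moving_var, training, eps=1e-3):
    """Simplified BatchNormalization with gamma=1, beta=0.

    training=True  -> normalize with the current batch's statistics
    training=False -> normalize with the stored moving statistics
    """
    if training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
    else:
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0], [2.0], [3.0], [4.0]])

# Inference mode with freshly initialized moving stats (mean=0, var=1)
# leaves x almost unchanged -- often the surprising "first" output.
print(batchnorm(x, moving_mean=0.0, moving_var=1.0, training=False))

# Training mode actually standardizes the batch -- the "second" output,
# which matches what PyTorch's batchnorm shows in training mode.
print(batchnorm(x, moving_mean=0.0, moving_var=1.0, training=True))
```

So the discrepancy is usually about whether the layer is called in training mode, not about a bug in the function itself.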
Adaptation module design for stacking two CNNs
I’m trying to stack two different CNNs using an adaptation module to bridge them, but I’m having a hard time determining the adaptation module’s layer hyperparameters correctly. To be more precise, I would like to train the adaptation module to bridge two convolutional layers: Layer A with output shape (29,29,256) and Layer B with input shape (8,8,384). So, after Layer A,
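One way to pin down the hyperparameters is to work the valid-convolution output-size formula backwards. The helper below and the kernel/stride values are one hypothetical solution for 29 → 8, not the only one; the channel change (256 → 384) is simply the number of filters.

```python
def conv_output_size(in_size, kernel, stride, padding=0):
    """Spatial output size of a convolution: floor((in - k + 2p) / s) + 1."""
    return (in_size - kernel + 2 * padding) // stride + 1

# Bridging Layer A's 29x29 output to Layer B's 8x8 input.
# One hypothetical choice: a single valid conv, kernel 8, stride 3.
print(conv_output_size(29, kernel=8, stride=3))  # 8

# Stacking smaller convs also works, e.g. kernel 3, stride 2, twice:
# 29 -> 14 -> 6 (too small), so check each candidate with the formula.
print(conv_output_size(conv_output_size(29, 3, 2), 3, 2))  # 6
```

Checking candidates this way before building the model avoids trial-and-error shape mismatches at fit time.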
TensorFlow libdevice not found. Why is it not found in the searched path?
Win 10 64-bit 21H1; TF 2.5, CUDA 11 installed in the environment (Python 3.9.5, Xeus). I am not the only one seeing this error; see also the (unanswered) reports here and here. The issue is obscure, and the proposed resolutions are unclear or don’t seem to work (see e.g. here). Issue Using the TF Linear_Mixed_Effects_Models.ipynb example (download from the TensorFlow GitHub here), execution reaches the point of
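A commonly suggested workaround is to point XLA explicitly at the CUDA directory that contains the nvvm/libdevice folder via the XLA_FLAGS environment variable. The paths below are assumptions; substitute your actual CUDA 11 install location.

```shell
# Windows (cmd) -- example path, use your actual CUDA install directory:
set XLA_FLAGS=--xla_gpu_cuda_data_dir="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2"

# Linux equivalent:
export XLA_FLAGS=--xla_gpu_cuda_data_dir=/usr/local/cuda-11.2
```

Set the variable before launching Python (or the Jupyter kernel) so TensorFlow picks it up at startup.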
Keras flatten: ValueError: Attempt to convert a value (None) with an unsupported type () to a Tensor
I have the error mentioned in the title, with the following code. This produces the following error. According to a question asked about the same error, it happens when you mix up keras and tf.keras. But I think I have defined the imports accordingly, so unless there is a clash between imports or a bad definition of them, I do not
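A sketch of the usual fix, assuming the mixed-imports diagnosis applies: take every layer and model class from the same namespace (tf.keras here) rather than mixing the standalone keras package with tensorflow.keras.

```python
import tensorflow as tf
# Use tf.keras consistently -- do not also `from keras.layers import ...`,
# since mixing the standalone keras package with tf.keras can raise exactly
# this kind of unsupported-type / Tensor-conversion error.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.output_shape)  # (None, 10)
```

Grepping the file for bare `from keras` imports and replacing them with `from tensorflow.keras` is usually enough to rule this cause in or out.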
Error in implementing autokeras timeseries model
I was trying to use autokeras’ TimeSeriesForecaster on a serial dataset. The features and labels of the dataset are given below, respectively. df1_x = AutoML preparation The dataframe has no NaN values, and the shape of the features dataframe is (7111, 8), i.e. a 2D dataframe. But the error came up as follows: Answer You need to provide validation data to the
Unknown image file format. One of JPEG, PNG, GIF, BMP required
I built a simple CNN model and it raised the errors below: The code I wrote is quite simple and standard; most of it is directly copied from the official website. It raised this error before the first epoch finished. I am pretty sure that the images are all PNG files. The train folder does not contain anything like text,
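This error usually means at least one file in the directory is not actually a decodable image, whatever its extension says. A small stand-alone checker for the four formats named in the error, using their standard magic bytes (the helper name and the "train" directory are assumptions):

```python
import pathlib

def looks_like_supported_image(path):
    """Return True if the file starts with a JPEG, PNG, GIF, or BMP signature."""
    with open(path, "rb") as f:
        head = f.read(8)
    return (head.startswith(b"\xff\xd8\xff")            # JPEG
            or head.startswith(b"\x89PNG\r\n\x1a\n")    # PNG
            or head.startswith((b"GIF87a", b"GIF89a"))  # GIF
            or head.startswith(b"BM"))                  # BMP

# Scan a hypothetical dataset directory and report offending files.
train_dir = pathlib.Path("train")
if train_dir.is_dir():
    for p in train_dir.rglob("*.png"):
        if not looks_like_supported_image(p):
            print("not a valid image:", p)
```

Running a scan like this over the training folder typically turns up the one corrupt or mislabeled file that kills the epoch.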
Keras: Does model.predict() require normalized data if I train the model with normalized data?
After completing model training using Keras, I am trying to use Keras’ model.predict() in order to test the model on novel inputs. When I trained the model, I normalized my training data with Scikit-Learn’s MinMaxScaler(). Do I need to normalize the data as well when using model.predict()? If so, how do I do it? Answer Yes, you need to. Because
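A sketch of the mechanics, assuming the MinMaxScaler setup from the question (the data values are hypothetical): fit the scaler once on the training data, then reuse that same fitted scaler’s transform() on any new inputs before calling predict.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical training data; fit the scaler ONCE on this.
X_train = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)

# New inputs at prediction time: reuse the SAME fitted scaler.
# Calling fit_transform here would rescale to the new data's own
# min/max and silently break the model's expected input distribution.
X_new = np.array([[5.0, 25.0]])
X_new_scaled = scaler.transform(X_new)
print(X_new_scaled)  # [[0.5  0.75]]

# model.predict(X_new_scaled)  # hypothetical model from the question
```

Persisting the fitted scaler alongside the model (e.g. with joblib) keeps the training-time and prediction-time normalization identical.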