So, I started a small project based on TensorFlow and could not figure out how to prepare a dataset from input generated in memory. I have a varying number of sources that generate images, which I then pass to a Python script. The images arrive as byte arrays in PNG format. I collect the images into an array and want to build a dataset from it and train
Tag: tensorflow
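A minimal sketch of how such in-memory PNG byte strings could be turned into a tf.data pipeline; the dummy png_frames list, the image size, and the batch size are assumptions standing in for the images produced by the sources, not code from the original question.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the PNG byte arrays collected from the external sources.
png_frames = [
    tf.io.encode_png(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)).numpy()
    for _ in range(8)
]

def decode(png_bytes):
    img = tf.io.decode_png(png_bytes, channels=3)          # bytes -> uint8 HxWx3 tensor
    return tf.image.convert_image_dtype(img, tf.float32)   # scale to [0, 1]

dataset = (tf.data.Dataset.from_tensor_slices(png_frames)  # one element per PNG string
           .map(decode, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(4)
           .prefetch(tf.data.AUTOTUNE))
```

The resulting dataset can be passed directly to model.fit once labels are zipped in.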
Element-wise multiplication of two lists that are tf.Tensor in tensorflow
What is the fastest way to do an element-wise multiplication between a tensor and an array in Tensorflow 2? For example, if the tensor T (of type tf.Tensor) is: and we have an array a (of type np.array): I want to have: as output. Answer This is called the outer product of two tensors. It's easy to compute by taking
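The concrete tensors are omitted above, so here is a sketch with made-up values for T and a showing the outer-product computation the answer refers to, via tf.tensordot and via broadcasting.

```python
import numpy as np
import tensorflow as tf

T = tf.constant([1.0, 2.0, 3.0])                # assumed example values
a = np.array([10.0, 20.0], dtype=np.float32)    # assumed example values

# Outer product: every element of T times every element of a -> shape (3, 2).
outer = tf.tensordot(T, a, axes=0)

# Equivalent via explicit broadcasting.
outer_b = T[:, tf.newaxis] * a[tf.newaxis, :]

print(outer.numpy())
# [[10. 20.]
#  [20. 40.]
#  [30. 60.]]
```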
Synchronization for video/audio/text message in flask web app framework for facial emotion recognition
I have trained a CNN model in Google Colab for facial expression detection with the FER2013 dataset, which contains 7 emotion classes ('Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral'). I used the Flask framework to build a web application. OpenCV's haarcascade_frontalface_default.xml is used to detect faces. With this I'm able to do real-time live streaming of the video using my laptop's webcam and
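The synchronization question itself is cut off above; for context, a minimal sketch of the webcam face-detection loop being described (OpenCV Haar cascade only, with the trained emotion model and the Flask streaming route omitted).

```python
import cv2

# Detect faces frame-by-frame from the laptop webcam using the bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        # Each (x, y, w, h) box is where the emotion classifier would be applied.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to stop
        break

cap.release()
cv2.destroyAllWindows()
```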
An error occurred in code like this: "argument of type 'method' is not iterable"
I would like to predict future stock prices, and I tried to create a calculate function, but when I run the code below I get an error. I am not sure whether I am missing () or not. Could you please advise me? The code and the resulting error are shown below. Answer The issue is with the
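The asker's code is not included above, but this TypeError almost always means a method was referenced without being called. A hypothetical pandas reproduction (the DataFrame and column name are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"Close": [1.0, 2.0, 3.0]})

# Triggers "argument of type 'method' is not iterable": df.keys is the bound
# method object itself, which `in` cannot search through.
# "Close" in df.keys        # TypeError

# Adding the parentheses calls the method, so the membership test works.
print("Close" in df.keys())  # True
```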
Tensorflow dataset, how to concatenate/repeat data within each batch?
If I have the following dataset: dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6]) When I use a batch_size=2, I would get [[1,2], [3,4], [5,6]]. However, I would like to get the following output: [[1,2,1,2], [3,4,3,4], [5,6,5,6]] Basically, I want to repeat the batch dimension by 2x and use this as a new batch. Obviously, this is a toy example.
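One way to get the requested output (not necessarily the accepted answer) is to batch first and then tile each batch along its only axis; a small sketch:

```python
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])

# Batch, then repeat each batch 2x along axis 0: [1, 2] -> [1, 2, 1, 2].
repeated = dataset.batch(2).map(lambda x: tf.tile(x, [2]))

for batch in repeated:
    print(batch.numpy())
# [1 2 1 2]
# [3 4 3 4]
# [5 6 5 6]
```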
Tensorflow: `tf.reshape((), (0))` works fine in eager mode but ValueError in Graph mode
As the title says, the call tf.reshape((), (0)) works perfectly fine in eager mode. But when I use it in Graph mode, it returns: ValueError: Shape must be rank 1 but is rank 0 for '{{node Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](Reshape/tensor, Reshape/shape)' with input shapes: [0], []. Can anyone help me with a workaround for this function, please? You can reproduce this
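A likely cause and a workaround sketch: in Python, (0) is just the integer 0, so the shape argument is rank 0; graph mode's Reshape op requires a rank-1 shape tensor, while eager mode is more forgiving. Passing a one-element tuple or list works in both modes:

```python
import tensorflow as tf

@tf.function          # force graph mode
def empty():
    # (0,) / [0] is a rank-1 shape, so Reshape accepts it when traced.
    return tf.reshape((), (0,))

print(empty())        # tf.Tensor([], shape=(0,), dtype=float32)
```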
Why does Keras run only 5 epochs out of 25?
I have uninstalled Keras and Tensorflow and reinstalled them both using But even after that, this strange thing still happens where only 5 epochs run: I cannot track down when this started, but it used to run all of the epochs. Here is my code: I also use Please direct me. Answer Try replacing the steps_per_epoch =
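The asker's code is cut off above and the answer is truncated at steps_per_epoch, so the following is only a sketch of the usual fix: with a finite (non-repeating) dataset or generator, asking for more steps than it can yield makes fit() stop early with a "ran out of data" warning. The data and model here are hypothetical.

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100, 1)).astype("float32")
batch_size = 10

# .repeat() keeps the dataset from being exhausted across epochs;
# steps_per_epoch = samples // batch_size then gives exactly one pass per epoch.
train_ds = (tf.data.Dataset.from_tensor_slices((X, y))
            .shuffle(100)
            .batch(batch_size)
            .repeat())

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

model.fit(train_ds,
          steps_per_epoch=len(X) // batch_size,
          epochs=25,
          verbose=0)             # all 25 epochs run
```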
Tensorflow: Issues with determining batch size in custom loss function during model fitting (batch size of “None”)
I'm trying to create a custom loss function in which I have to slice the tensors multiple times. One example is listed below: This (and the entire loss function) works fine when testing it manually on self-made tensors y_true and y_pred, but when using it inside a loss function it gives an error upon model fitting (compiling goes fine).
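The sliced loss itself is not shown above; the usual cause is that the static batch dimension is None while Keras traces the loss, so shape-based slicing fails. A sketch of the standard workaround, tf.shape for the runtime batch size, with a made-up model and data:

```python
import numpy as np
import tensorflow as tf

def custom_loss(y_true, y_pred):
    # y_true.shape[0] is None during tracing; tf.shape() returns the
    # runtime batch size, so slicing on it works when the model is fitted.
    half = tf.shape(y_true)[0] // 2
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    mse_first_half = tf.reduce_mean(tf.square(y_true[:half] - y_pred[:half]))
    return mse + 0.1 * mse_first_half

model = tf.keras.Sequential([tf.keras.Input(shape=(3,)), tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=custom_loss)
model.fit(np.random.rand(64, 3).astype("float32"),
          np.random.rand(64, 1).astype("float32"),
          batch_size=8, verbose=0)
```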
Elegant way to plot the average loss of many training runs in tensorflow
I am running many repetitions of the same training so I can smooth out the loss curves. I would like an elegant way to average all the losses from history.history['loss'] but haven't found an easy way to do it. Here's a minimal example: If I wanted to plot just one example, I would do this: But instead, I want to average
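The asker's minimal example is cut off above; a sketch of one straightforward approach, collecting each run's history.history['loss'] into a list and averaging per epoch with NumPy (the random losses stand in for the real training runs):

```python
import numpy as np
import matplotlib.pyplot as plt

# One entry per run, each the history.history['loss'] list of that run
# (all runs assumed to train for the same number of epochs).
losses = [np.random.rand(20) for _ in range(10)]

mean_loss = np.mean(losses, axis=0)   # per-epoch average across runs
std_loss = np.std(losses, axis=0)

epochs = np.arange(1, len(mean_loss) + 1)
plt.plot(epochs, mean_loss, label="mean loss")
plt.fill_between(epochs, mean_loss - std_loss, mean_loss + std_loss, alpha=0.3)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```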
Data type preference for training a CNN?
I was originally using input data of uint8 type ranging from 0-255 before learning that standardizing and normalizing should increase learning speed and accuracy. I tried both, with and without a mean of zero, and neither method improved learning speed or accuracy for my model relative to the 0-255 uint8 approach. I'm just wondering whether training with, for example,
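The question is cut off above; for reference, a sketch of the two preprocessing variants being compared (pure rescaling vs. zero-mean standardization) applied to hypothetical uint8 image data:

```python
import numpy as np
import tensorflow as tf

# Hypothetical uint8 image batch in [0, 255].
x_uint8 = np.random.randint(0, 256, size=(32, 28, 28, 1), dtype=np.uint8)

# Normalization: cast to float32 and rescale to [0, 1].
x_norm = x_uint8.astype("float32") / 255.0

# Standardization: zero mean, unit variance.
x_std = (x_norm - x_norm.mean()) / (x_norm.std() + 1e-7)

# In recent TF versions the rescaling can also live inside the model,
# so raw uint8 batches can be fed directly.
rescale = tf.keras.layers.Rescaling(1.0 / 255)
```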