I'd like to test inference on a TensorFlow Lite model I've loaded into an Android project. I have some inputs generated in a Python environment that I'd like to save to a file, load into my Android app, and use for TFLite inference. My inputs are somewhat large; one example is: <class 'numpy.ndarray'>, dtype: float32, shape: (1, 596, 80). I need …
Tag: tensorflow
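A minimal sketch of one way to hand such an array to Android (the file name and random contents are stand-ins): write the raw float32 bytes from Python, then read the file on the Android side into a direct ByteBuffer for the TFLite Interpreter.

```python
import numpy as np

# Hypothetical array matching the shape described in the question.
inputs = np.random.rand(1, 596, 80).astype(np.float32)

# Write the raw float32 bytes with no header (native byte order, which is
# little-endian on common desktop/mobile platforms). On Android the file can
# be read into a direct, little-endian ByteBuffer and passed to Interpreter.run.
inputs.tofile("input_1x596x80.bin")

# Round-trip check: np.fromfile reads the flat buffer back.
restored = np.fromfile("input_1x596x80.bin", dtype=np.float32).reshape(1, 596, 80)
assert np.array_equal(inputs, restored)
```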
Neural Network loss is significantly changing for same set of weights – Keras
I use pre-initialized weights as the initial weights of the neural network, but the loss value keeps changing every time I train the model. If the initial weights are the same, then the model should predict exactly the same values on every training run, yet the MSE keeps changing. Is there anything that I am missing? Answer: You have all …
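For context, a sketch of the usual determinism checklist: identical initial weights alone do not make training runs repeatable, because data shuffling, dropout masks, and some GPU kernels are themselves random.

```python
import os
import random

import numpy as np
import tensorflow as tf

# Seed every source of randomness before building or training the model.
os.environ["PYTHONHASHSEED"] = "0"
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)

# Disabling shuffling in fit() removes one more source of run-to-run noise:
# model.fit(x, y, epochs=10, shuffle=False)
```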
tf-nightly-gpu and Keras
So, I got lucky and managed to get my hands on an RTX 3070. Unfortunately, this isn't working out as well as I would have liked when it comes to TensorFlow. I've spent some time on Google, and from what I can tell, tf-nightly-gpu is the solution to my issues here. I've installed CUDA 11/10, cuDNN, and …
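A quick way to check whether a given build actually sees the card (standard tf.config calls, nothing specific to this asker's setup):

```python
import tensorflow as tf

# Sanity check that the installed build detects the RTX 3070.
print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))

# If the list is empty, the installed CUDA/cuDNN versions usually do not
# match the versions this TensorFlow build was compiled against.
```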
How to build a TF tensor with ones in specified locations – batch compatible
I apologize for the poor question title, but I'm not sure quite how to phrase it. Here's the problem I'm trying to solve: I have two NNs working off of the same input dataset in my code. One of them is a traditional network, while the other is used to limit the acceptable range of the first. This works by …
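One batch-compatible way to place ones at given positions, sketched under assumed shapes (the real index layout in the question is not shown):

```python
import tensorflow as tf

# Hypothetical input: for each batch element, the column indices that
# should be set to one in an otherwise-zero row of width `depth`.
indices = tf.constant([[2, 5], [0, 7]])  # shape (batch, k)
depth = 10

# tf.one_hot yields (batch, k, depth); reducing over the k axis merges the
# k one-hot rows into a single multi-hot row per batch element.
mask = tf.reduce_max(tf.one_hot(indices, depth), axis=1)
print(mask)  # shape (2, 10), ones at columns [2, 5] and [0, 7]
```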
How do I fit a TensorFlow ImageDataGenerator?
I've built my model but do not know how to fit it. Could anyone give me a tip on using ImageDataGenerator in my models when working with images, or is it better to use another approach, such as a Dataset? My directory architecture: … PS: I found an article where the same method is used and everything seems to work, but not in …
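A sketch of the usual pattern, assuming a hypothetical data/train/<class_name>/ layout since the asker's directory tree is not shown in the excerpt:

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical layout: data/train/<class_name>/*.jpg
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="training")
val_gen = datagen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="validation")

# In TF 2.x, model.fit accepts the generator directly (fit_generator is
# deprecated). A deliberately tiny model keeps the sketch runnable:
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(train_gen.num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(train_gen, validation_data=val_gen, epochs=1)
```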
Implementing Multiclass Dice Loss Function
I am doing multi-class segmentation using a UNet. My input to the model is H×W×C and my output is … Using SparseCategoricalCrossentropy, I can train the network fine. Now I would also like to try the Dice coefficient as the loss function, implemented as follows: … However, I am actually getting an increasing loss instead of a decreasing one. I have checked multiple sources …
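For reference, one common multiclass Dice formulation (a sketch, not the asker's code). Returning the raw coefficient instead of one minus it is a frequent cause of a loss that climbs as training improves.

```python
import tensorflow as tf

def multiclass_dice_loss(y_true, y_pred, smooth=1e-6):
    """Assumes y_true is one-hot with shape (B, H, W, C) and y_pred holds
    softmax probabilities of the same shape; with sparse integer labels,
    apply tf.one_hot first. Returns 1 - mean Dice so the loss decreases
    as overlap improves."""
    axes = (0, 1, 2)  # sum over batch and spatial dims, keep the class axis
    intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
    denom = tf.reduce_sum(y_true + y_pred, axis=axes)
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    return 1.0 - tf.reduce_mean(dice)  # mean Dice over classes
```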
Modeling an Encoder-Decoder according to instructions from a paper [closed]
Closed. This question is opinion-based and is not currently accepting answers. (Closed 2 years ago.) I am new to this field, and I was reading the paper “Predicting citation counts based on deep neural network learning …
TensorFlow 2.3, TensorFlow Dataset, TypeError: <lambda>() takes 1 positional argument but 4 were given
I use tf.data.TextLineDataset to read four large files, and I use tf.data.Dataset.zip to zip these four files and create a dataset. However, I cannot pass the dataset to dataset.map to use tf.compat.v1.string_split and split on a tab ('\t') separator, and finally batch, prefetch, and feed it into my model. This is my code: … This is the error message: … What should I do? Answer …
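A sketch of the likely fix, with hypothetical file names: Dataset.zip over four datasets yields 4-tuples, so the mapped function must accept four arguments; a one-argument lambda raises exactly this TypeError.

```python
import tensorflow as tf

# Placeholder file names; the asker's real paths are not shown.
files = ["a.txt", "b.txt", "c.txt", "d.txt"]
dataset = tf.data.Dataset.zip(tuple(tf.data.TextLineDataset(f) for f in files))

def split_lines(a, b, c, d):
    # tf.strings.split is the TF2 replacement for tf.compat.v1.string_split.
    # It returns RaggedTensors; depending on line lengths you may want
    # .to_tensor() or padded_batch before batching.
    return tuple(tf.strings.split(line, sep="\t") for line in (a, b, c, d))

dataset = (dataset.map(split_lines)
                  .batch(32)
                  .prefetch(tf.data.experimental.AUTOTUNE))  # TF 2.3 spelling
```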
Ensemble with voting in deep learning models
I am working on multimodal deep learning classifiers with RGB-D images. I have developed two separate models, one for each modality. The first one is an LSTM with a CNN at the beginning for the RGB images, with shape (3046, 200, 200, 3), and the second one is an LSTM for the depth images, with shape (3046, 200, 200). I'm trying to figure out …
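A self-contained sketch of soft voting; the random arrays below stand in for the two models' softmax outputs on the same samples.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the RGB model's and depth model's class probabilities
# on the same 5 samples, 3 classes each.
probs_rgb = rng.dirichlet(np.ones(3), size=5)
probs_depth = rng.dirichlet(np.ones(3), size=5)

# Soft voting: average the class probabilities, then take the argmax.
avg_probs = (probs_rgb + probs_depth) / 2.0
predictions = np.argmax(avg_probs, axis=1)
print(predictions)
```

Hard (majority) voting would instead argmax each model's output and tally votes, which is awkward with only two voters; averaging probabilities is the usual choice for a two-model ensemble.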
What if validation_steps does not fit the number of samples?
It's a bit annoying that the tf.keras generator still faces this issue, unlike PyTorch. There are many discussions regarding this; however, I'm still stuck with it. Already visited: Meaning of validation_steps in Keras; steps_per_epoch does not fit into numbers of samples. Problem: I have a dataset consisting of around 21397 samples. I wrote a custom data loader which returns the total number …
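A sketch of the standard workaround: round the step count up so the final partial batch is kept rather than dropped.

```python
import math

# With 21397 samples and a batch size of 32, the division is not exact,
# so ceil keeps the last (smaller) batch instead of truncating it.
num_samples = 21397
batch_size = 32
validation_steps = math.ceil(num_samples / batch_size)
print(validation_steps)  # 669; the last step sees 21397 - 668 * 32 = 21 samples
```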