From all I know, a pretrained CNN should do much better than a CNN trained from scratch. I have a dataset of 855 images. I applied a CNN and got 94% accuracy. Then I applied pretrained models (VGG16, ResNet50, Inception_V3, MobileNet), also with fine-tuning, but the highest I got was 60%, and two of them did very badly on classification. Can a CNN trained from scratch really do better than a pretrained one?
Tag: deep-learning
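When fine-tuning collapses to 60% on a dataset this small, the usual suspects are an unfrozen base, too large a learning rate, or skipping each backbone's own preprocessing. A minimal Keras fine-tuning sketch (the class count and hyperparameters are illustrative, not the asker's; `weights=None` stands in for `weights="imagenet"` only so the sketch runs offline):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# weights=None keeps this sketch offline; use weights="imagenet" in practice.
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False               # freeze the convolutional base first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),   # assumed number of classes
])

# A small learning rate matters: a large one wipes out the pretrained
# features, which alone can explain accuracy collapsing to ~60%.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

Each backbone also has its own `tf.keras.applications.*.preprocess_input`; feeding it raw or differently scaled pixels is another frequent cause of the symptoms described.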
How to reproduce the Bottleneck Blocks in Mobilenet V3 with Keras API?
Using the Keras API, I am trying to write MobileNetV3 as explained in this article: https://arxiv.org/pdf/1905.02244.pdf with the architecture as described in this picture: For that, I need to implement the bottleneck_blocks from the previous article https://arxiv.org/pdf/1801.04381.pdf. See image for architecture: I managed to glue together the initial and final Conv layers: Where the bottleneck_block is given in the next
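A sketch of the inverted-residual bottleneck from the second paper, assuming plain ReLU6 blocks (MobileNetV3 additionally inserts squeeze-and-excitation and h-swish in some blocks; those are omitted here for brevity):

```python
import tensorflow as tf
from tensorflow.keras import layers

def bottleneck_block(x, expansion, out_channels, stride):
    """Inverted-residual bottleneck (MobileNetV2-style sketch)."""
    in_channels = x.shape[-1]
    y = layers.Conv2D(expansion * in_channels, 1, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)                      # 1x1 expansion, ReLU6
    y = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)                      # 3x3 depthwise, ReLU6
    y = layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)           # linear 1x1 projection (no activation)
    if stride == 1 and in_channels == out_channels:
        y = layers.Add()([x, y])                 # residual only when shapes match
    return y

inputs = tf.keras.Input((32, 32, 16))
outputs = bottleneck_block(inputs, expansion=6, out_channels=16, stride=1)
model = tf.keras.Model(inputs, outputs)
```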
IndexError: tensors used as indices must be long, byte or bool tensors
I am getting this error only during the testing phase; I do not face any problem in the training and validation phases. I get the error for the last line in the given code snippet. The code snippet looks like the one below. The “lab” is a tensor value and prints out the range in such a way, (Note*:
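Without the full snippet it is hard to be certain, but this error usually means the index tensor (here presumably “lab”) has a floating-point dtype. A minimal reproduction and the usual fix:

```python
import torch

x = torch.randn(5, 3)
idx = torch.tensor([0.0, 2.0, 4.0])  # float dtype, e.g. from arithmetic on labels

# x[idx] raises: IndexError: tensors used as indices must be long, byte or bool tensors
selected = x[idx.long()]             # cast the indices to int64 first
```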
Loss function for CNN sliding window model, for multi Object
I implemented a model in Python using Keras, which is a series of convolutional layers that takes a 512*512 image and converts it to a tensor with dimensions 16*16. I'm now trying to detect an object in this 16*16 tensor so that it gives me a 1 for detection and a 0 otherwise. The problem is I don't know
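One common choice for this kind of per-cell presence map, assuming several objects can appear at once, is a sigmoid on each of the 16*16 cells with binary cross-entropy (rather than a single softmax over the whole map). A small sketch with illustrative shapes:

```python
import tensorflow as tf

y_true = tf.zeros((2, 16, 16, 1))        # 1 where an object is present, else 0
raw = tf.random.normal((2, 16, 16, 1))   # network output before activation
y_pred = tf.sigmoid(raw)                 # independent "object here?" per cell

loss_fn = tf.keras.losses.BinaryCrossentropy()
loss = loss_fn(y_true, y_pred)           # scalar, averaged over all cells
```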
Cannot use vggface-keras in Tensorflow 2.0
I am trying to use the keras-vggface library from https://github.com/rcmalli/keras-vggface to train a CNN. I have installed tensorflow 2.0.0-rc1, keras 2.3.1, cuda 10.1, cudnn 7.6.5, and driver version 418. The problem is that when I try to use the vggface model as a convolutional base, I get an error. Here is the code and the error: Error! I
How to fix “ResourceExhaustedError: OOM when allocating tensor”
I want to make a model with multiple inputs, so I tried to build a model like this. And the summary: But when I try to train this model, the problem happens: Thanks for reading and hopefully helping me :) Answer OOM stands for “out of memory”. Your GPU is running out of memory, so it can’t allocate
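Two common mitigations, sketched (the right fix depends on the model): shrink the batch size, and ask TensorFlow to grow GPU memory on demand instead of pre-allocating all of it:

```python
import tensorflow as tf

# Let TF allocate GPU memory as needed rather than grabbing it all up front.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Then halve batch_size until the OOM disappears, e.g.:
# model.fit(x, y, batch_size=8)
```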
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
This: Gives the error: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same Answer You get this error because your model is on the GPU, but your data is on the CPU. So, you need to send your input tensors to the GPU. Or like this, to stay consistent with the rest of your code: The same
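The pattern the answer describes, as a minimal sketch:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)   # weights live on the GPU when one is available
x = torch.randn(3, 4)                # created on the CPU by default

out = model(x.to(device))            # move the input to the model's device first
```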
How to change an activation layer in a PyTorch pretrained module?
How to change the activation layer of a PyTorch pretrained network? Here is my code: Here is my output: Answer ._modules solves the problem for me.
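A small sketch of the `_modules` trick on a toy network (the key `"1"` is just the child's name in this example; for a pretrained model, use the names shown by `print(model)`):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

# _modules is an ordered dict of named children; overwriting an entry
# swaps the layer in place. "1" is the ReLU's name in this toy model.
model._modules["1"] = nn.LeakyReLU(0.1)
```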
Model not training and negative loss when whitening input data
I am doing segmentation and my dataset is fairly small (1840 images), so I would like to use data augmentation. I am using the generator provided in the Keras documentation, which yields a tuple with a batch of images and the corresponding masks augmented the same way. I am then training my model with this generator: But by using
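The Keras-documentation pattern pairs two ImageDataGenerator instances with a shared seed so images and masks receive identical transforms. Note that whitening options (featurewise_center and friends) should be applied to the images only: whitened masks fall outside [0, 1], which is the classic cause of a negative cross-entropy loss. A sketch with illustrative shapes:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Same augmentation arguments for both generators; the shared seed makes
# each image and its mask receive the identical random transform.
data_gen_args = dict(rotation_range=10, horizontal_flip=True)
image_datagen = ImageDataGenerator(**data_gen_args)  # whitening, if any, goes here only
mask_datagen = ImageDataGenerator(**data_gen_args)   # never whiten the masks

X = np.random.rand(8, 64, 64, 3).astype("float32")   # illustrative shapes
Y = (np.random.rand(8, 64, 64, 1) > 0.5).astype("float32")

seed = 1
train_generator = zip(
    image_datagen.flow(X, batch_size=4, seed=seed),
    mask_datagen.flow(Y, batch_size=4, seed=seed),
)
images, masks = next(train_generator)
```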
Python 3 causes memory error at shuffle(X, Y) where X is 36000 3-channel images (36000, 256, 256, 3) and Y is 3-channel normal data (36000, 256, 256, 3)
The following image shows the memory usage: a memory error occurs. I am using NumPy and Python 3. I have two numpy arrays of shape (36000, 256, 256, 3) each, as X and Y, and the memory error occurs when I run the following code. This is code to prepare the training data. Is there another way to do it that uses less memory? This is my processor: Intel® Xeon(R)
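Assuming shuffle(X, Y) is sklearn.utils.shuffle, it returns shuffled copies, which temporarily doubles the memory footprint of the two big arrays. np.random.shuffle works in place along the first axis; saving and restoring the RNG state applies the same permutation to both arrays without any copy:

```python
import numpy as np

# Toy stand-ins for the two (36000, 256, 256, 3) arrays.
X = np.arange(10, dtype=np.float64).reshape(10, 1)
Y = X * 2                       # row i of Y corresponds to row i of X

# Shuffle both arrays with the SAME permutation, in place (no copies).
state = np.random.get_state()
np.random.shuffle(X)            # in-place along the first axis
np.random.set_state(state)      # rewind the RNG
np.random.shuffle(Y)            # same permutation applied to Y

assert np.array_equal(Y, X * 2) # pairing preserved
```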