
Tag: gpu

PyTorch CUDA out of memory while running inference

I think this is a very basic question; my apologies, as I am very new to PyTorch. I am trying to find out whether an image has been manipulated using MantraNet. After running 2-3 inferences I get a CUDA out-of-memory error, and even after restarting the kernel I keep getting the same error. The error is given below: Runt…
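The excerpt cuts off before the answer, but a common cause is running inference with autograd enabled, so each forward pass keeps its activation graph alive on the GPU. A minimal sketch of the usual fix, with a stand-in module in place of the actual MantraNet (the model and input below are hypothetical placeholders):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for MantraNet: any nn.Module behaves the same way here.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).to(device).eval()
image_tensor = torch.rand(1, 3, 256, 256, device=device)  # dummy input batch

with torch.no_grad():            # no autograd graph, so activations are freed
    prediction = model(image_tensor)

result = prediction.cpu()        # keep the result on the CPU
del prediction                   # drop the GPU reference
if torch.cuda.is_available():
    torch.cuda.empty_cache()     # return cached blocks to the driver
```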

Why did the tensor size not change?

I made a toy CNN model. Then I checked model.summary via this code and got the following results: I want to reduce the model size because I want to increase the batch size. So I changed torch.float32 -> torch.float16 via NVIDIA/apex. As a result, torch.dtype was changed to torch.float16 from…
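The likely explanation: casting float32 to float16 halves the bytes per element but leaves every tensor shape and parameter count unchanged, so a summary that reports shapes or counts looks identical. A sketch of the same dtype change using plain model.half() in place of apex (the toy CNN below is a hypothetical stand-in):

```python
import torch
import torch.nn as nn

def param_bytes(m: nn.Module) -> int:
    # Total parameter memory: element count times bytes per element.
    return sum(p.numel() * p.element_size() for p in m.parameters())

# Hypothetical toy CNN standing in for the model in the question.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

w = model[0].weight
print(w.dtype, tuple(w.shape), param_bytes(model))  # torch.float32, (16, 3, 3, 3), N bytes

model.half()  # cast all parameters and buffers to float16 in place

w = model[0].weight
print(w.dtype, tuple(w.shape), param_bytes(model))  # torch.float16, same shape, N/2 bytes
```

The memory saving is real (4 bytes per parameter down to 2), it just does not show up in a summary that only lists tensor shapes.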

TypeError: expected CPU (got CUDA)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) y_train import torch import torch.nn as nn import torch.nn.functional as F When I run this code I get this error. How can I solve it? Answer: To transfer the variables to the GPU, try the following:
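The answer's code is cut off in the excerpt; a minimal sketch of the usual fix, assuming X_train and y_train are NumPy arrays from train_test_split (the random data and the Linear model below are hypothetical placeholders):

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-ins for the train_test_split output.
X_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 2, size=100)

# Convert to tensors and move data and model to the same device, so no op
# receives a CPU tensor where it expects a CUDA one (or vice versa).
X_t = torch.from_numpy(X_train).to(device)
y_t = torch.from_numpy(y_train).long().to(device)
model = nn.Linear(4, 2).to(device)

loss = F.cross_entropy(model(X_t), y_t)
```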

TensorFlow cannot find GPU

I had installed “tensorflow-gpu” with CUDA 10.0, and my GPU is a GTX 1660 Ti. I also tested with CUDA 10.2 and 11. I added cuDNN to the Windows PATH but I still got this error. Answer: I found the problem; it was the versions of CUDA and cuDNN.
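Each tensorflow-gpu release is built against one specific CUDA/cuDNN pair, listed in TensorFlow's tested-configurations table, so mixing versions leaves the GPU invisible. A short check that the installed build actually sees the card:

```python
import tensorflow as tf

# Confirm this build was compiled with CUDA support at all.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# List whichever GPUs TensorFlow can see; an empty list usually means a
# CUDA/cuDNN version mismatch or a driver problem, not a code problem.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```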