
Tag: gpu

Why does GPU memory increase when recreating and reassigning a JAX numpy array to the same variable name?

When I recreate a JAX numpy array and reassign it to the same variable name, the GPU memory nearly doubles on the first recreation and then stays stable for subsequent recreations/reassignments. Why does this happen, and is this expected behavior for JAX arrays? Fully runnable minimal example: https://colab.research.google.com/drive/1piUvyVylRBKm1xb1WsocsSVXJzvn5bdI?usp=sharing. For posterity, in case the Colab link goes down: Thank you! Answer
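A minimal sketch of what is likely happening, assuming the doubling comes from the rebinding order: when the same name is reassigned, the new buffer is allocated while the name still holds the old one, so peak usage briefly covers both arrays, and XLA's caching allocator keeps the grown pool afterwards (the array shape below is hypothetical):

import jax.numpy as jnp

shape = (8192, 8192)                 # hypothetical size, large enough to see on a memory monitor

x = jnp.ones(shape)                  # first allocation on the GPU
x.block_until_ready()

x = jnp.ones(shape)                  # new buffer allocated before the old one is released -> peak ~2x
x.block_until_ready()

del x                                # dropping the reference first frees the old buffer ...
x = jnp.ones(shape)                  # ... so this reallocation peaks at roughly one array
x.block_until_ready()

Setting the environment variable XLA_PYTHON_CLIENT_PREALLOCATE=false before importing JAX also makes the reported memory track actual allocations more closely, since by default JAX preallocates a large fraction of the GPU.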

PyTorch CUDA out of memory while running inference

I think this is a very basic question; my apologies, as I am very new to PyTorch. I am trying to find out whether an image is manipulated or not using MantraNet. After running 2-3 inferences I get a CUDA out-of-memory error, and even after restarting the kernel I keep getting the same error. The error is given below: RuntimeError:
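A hedged sketch of the usual inference-time fixes, assuming the out-of-memory error comes from autograd history being recorded and from results being kept on the GPU (model and images are placeholders, not the original MantraNet code):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()               # 'model' stands in for the loaded MantraNet

results = []
with torch.no_grad():                         # don't record the autograd graph during inference
    for image in images:                      # 'images' is a hypothetical iterable of image tensors
        pred = model(image.unsqueeze(0).to(device))
        results.append(pred.detach().cpu())   # move each result off the GPU before the next one

torch.cuda.empty_cache()                      # release cached blocks back to the driver

Restarting the kernel normally does free the memory, so if nvidia-smi still shows it in use after a restart, another process (or a zombie kernel) is probably holding it.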

Why was the tensor size not changed?

I made a toy CNN model. Then I checked model.summary via this code and got the following results: I want to reduce the model size because I want to increase the batch size. So I changed torch.float32 -> torch.float16 via NVIDIA/apex. As a result, torch.dtype was changed from torch.float32 to torch.float16, but Param size (MB): 35.19 was unchanged.
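A plausible explanation, hedged: with apex AMP at the common O1 level, the model's master weights stay in torch.float32 and only selected ops run in half precision, so the parameter-size line in the summary does not shrink. A minimal sketch (toy layers, not the original model) showing that a full model.half() cast does halve the reported parameter size:

import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 64, 3), nn.ReLU(), nn.Conv2d(64, 64, 3))   # toy stand-in

fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
model = model.half()                           # cast every parameter to torch.float16
fp16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

print(f"{fp32_bytes / 2**20:.2f} MB -> {fp16_bytes / 2**20:.2f} MB")           # roughly halved

Note that halving parameter storage alone rarely frees much memory for a CNN; activation memory usually dominates, which is why mixed precision targets activations as well.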

TypeError: expected CPU (got CUDA)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) y_train import torch import torch.nn as nn import torch.nn.functional as F When I run this code I get this error: How can I solve this error? Answer To transfer the variables to the GPU, try the following:
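The quoted answer is cut off, so here is a hedged sketch of the kind of fix it points at: put the model and the data on the same device before calling the model (Net and the dtypes are assumptions, not the asker's code):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Convert the NumPy splits to tensors on the same device the model will use.
X_train_t = torch.tensor(X_train, dtype=torch.float32, device=device)
y_train_t = torch.tensor(y_train, dtype=torch.long, device=device)

model = Net().to(device)        # 'Net' stands in for the asker's nn.Module subclass
output = model(X_train_t)       # no CPU/CUDA mismatch: both sides live on 'device'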

ERROR: Could not find a version that satisfies the requirement dask-cudf (from versions: none)

Describe the bug: When I try to import dask_cudf I get the following ERROR: I have dask and RAPIDS installed with pip. When I search for pip install dask_cudf, the original site no longer exists: https://pypi.org/project/dask-cudf/ Google's cached copy of the page: https://webcache.googleusercontent.com/search?q=cache:8in7y2jQFQIJ:https://pypi.org/project/dask-cudf/+&cd=1&hl=en&ct=clnk&gl=uk I am trying to install it with the following code in the Google Colab window: %pip install dask-cudf
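One hedged workaround, assuming a reasonably current RAPIDS release: the dask-cudf wheels are published on NVIDIA's package index rather than plain PyPI, and the package name carries a CUDA-version suffix, so in a Colab cell something like the following should work (dask-cudf-cu12 is an assumption; pick the suffix matching the installed CUDA toolkit):

%pip install --extra-index-url=https://pypi.nvidia.com dask-cudf-cu12

import dask_cudf
print(dask_cudf.__version__)

The conda route (installing from the rapidsai channel) is the other officially supported path.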

TensorFlow cannot find the GPU

I had installed "tensorflow-gpu" with CUDA 10.0, and my GPU is a GTX 1660 Ti. I also tested with CUDA 10.2 and 11. I added cuDNN to the Windows PATH but I still got this error. Answer I found the problem: it was the versions of CUDA and cuDNN.
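Since the fix turned out to be mismatched CUDA/cuDNN versions, a small check like the one below helps confirm what TensorFlow can actually see; each TensorFlow release is built against one specific CUDA/cuDNN pair, so the printed version should be matched against the tested-build table in the TensorFlow install docs (on older releases the list_physical_devices call lives under tf.config.experimental):

import tensorflow as tf

print(tf.__version__)                              # look this version up in TF's tested-build table
print(tf.test.is_built_with_cuda())                # False means the installed wheel has no GPU support at all
print(tf.config.list_physical_devices("GPU"))      # an empty list means CUDA/cuDNN were not found at runtime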

Get LightGBM/LGBM to run with GPU on Google Colaboratory

I often run LGBM on Google Colaboratory, and I just found this page saying that LGBM is set to CPU by default, so you need to set it up first: https://medium.com/@am.sharma/lgbm-on-colab-with-gpu-c1c09e83f2af So I executed the code recommended on that page, and some other code recommended on Stack Overflow, as follows: !git clone --recursive https://github.com/Microsoft/LightGBM %cd LightGBM !mkdir build %cd build !cmake
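Once the GPU build succeeds (the documented CMake flag is -DUSE_GPU=1), the GPU still has to be requested per training run. A hedged sketch with throwaway random data, just to confirm the build works:

import lightgbm as lgb
import numpy as np

X = np.random.rand(1000, 20)               # hypothetical toy data
y = np.random.randint(0, 2, 1000)

params = {
    "objective": "binary",
    "device": "gpu",                       # only honored if LightGBM was compiled with GPU support
    "gpu_platform_id": 0,
    "gpu_device_id": 0,
}
booster = lgb.train(params, lgb.Dataset(X, label=y), num_boost_round=10)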
