I am trying to use CUDA as the backend for the DNN module provided in OpenCV 4.1.1. I have built OpenCV with CUDA enabled, and the NVIDIA drivers and the CUDA toolkit are properly installed on the system. I am using Manjaro as the development platform.
I am trying to load pre-trained YOLOv3 weights using the cv2.dnn module:
net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
But it uses the CPU as the default inference engine, while I want the GPU as the backend. From the official OpenCV documentation, I found the following targets:
DNN_TARGET_CPU, DNN_TARGET_OPENCL, DNN_TARGET_OPENCL_FP16, DNN_TARGET_MYRIAD, DNN_TARGET_FPGA
as target backends, but there is no support for direct CUDA GPU inference. So what is the point of compiling OpenCV with CUDA if it does not use the GPU as the underlying inference engine?
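For reference, this is a minimal sketch of the closest thing to GPU inference I can select with the targets listed above in 4.1.1 (assuming an OpenCL-capable device); none of these targets uses CUDA:

import cv2

net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)  # default OpenCV backend
net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)    # GPU via OpenCL, not CUDA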
To check whether any process is running on the GPU, here is the output of nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.26       Driver Version: 430.26       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 710      Off  | 00000000:01:00.0 N/A |                  N/A |
| 40%   40C    P0    N/A /  N/A |    598MiB /  1998MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  0                      Not Supported                                       |
+-----------------------------------------------------------------------------+
The process list shows nothing utilizing the GPU, which clearly means inference is not running on the GPU. Any guidance will be appreciated.
Answer
At the moment, CUDA support for the DNN module is a work in progress under a GSoC task, so there is no official release yet. You can check its repo here to see the progress.
Edit: It looks like the CUDA backend integration is complete and included in release version 4.2.0; you can check the change logs here.
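As a minimal sketch (assuming OpenCV >= 4.2.0 built with CUDA support, i.e. WITH_CUDA=ON and OPENCV_DNN_CUDA=ON), the CUDA backend and target can then be selected like this:

import cv2

net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # run inference through the CUDA backend
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)    # or DNN_TARGET_CUDA_FP16 on GPUs with fast FP16

With these two calls the forward pass runs on the GPU, and nvidia-smi should then list the Python process under GPU memory usage.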