I’m using mambaforge on WSL2 (Ubuntu 22.04) with systemd
enabled. I’m trying to install TensorFlow 2.10 with CUDA enabled, using the command:
mamba install tensorflow
Running nvidia-smi -q
from WSL2 gives:
==============NVSMI LOG==============

Timestamp                                 : Sat Dec 17 23:22:43 2022
Driver Version                            : 527.56
CUDA Version                              : 12.0

Attached GPUs                             : 1
GPU 00000000:01:00.0
    Product Name                          : NVIDIA GeForce RTX 3070 Laptop GPU
    Product Brand                         : GeForce
    Product Architecture                  : Ampere
    Display Mode                          : Disabled
    Display Active                        : Disabled
    Persistence Mode                      : Enabled
    MIG Mode
        Current                           : N/A
        Pending                           : N/A
    Accounting Mode                       : Disabled
    Accounting Mode Buffer Size           : 4000
    Driver Model
        Current                           : WDDM
        Pending                           : WDDM
    Serial Number                         : N/A
    GPU UUID                              : GPU-f03a575d-7930-47f3-4965-290b89514ae7
    Minor Number                          : N/A
    VBIOS Version                         : 94.04.3f.00.d7
    MultiGPU Board                        : No
    Board ID                              : 0x100
    Board Part Number                     : N/A
    GPU Part Number                       : 249D-750-A1
    Module ID                             : 1
    Inforom Version
        Image Version                     : G001.0000.03.03
        OEM Object                        : 2.0
        ECC Object                        : N/A
        Power Management Object           : N/A
    GPU Operation Mode
        Current                           : N/A
        Pending                           : N/A
    GSP Firmware Version                  : N/A
    GPU Virtualization Mode
        Virtualization Mode               : None
        Host VGPU Mode                    : N/A
    IBMNPU
        Relaxed Ordering Mode             : N/A
    PCI
        Bus                               : 0x01
        Device                            : 0x00
        Domain                            : 0x0000
        Device Id                         : 0x249D10DE
        Bus Id                            : 00000000:01:00.0
        Sub System Id                     : 0x118C1043
        GPU Link Info
            PCIe Generation
                Max                       : 3
                Current                   : 3
                Device Current            : 3
                Device Max                : 4
                Host Max                  : 3
            Link Width
                Max                       : 16x
                Current                   : 8x
        Bridge Chip
            Type                          : N/A
            Firmware                      : N/A
        Replays Since Reset               : 0
        Replay Number Rollovers           : 0
        Tx Throughput                     : 0 KB/s
        Rx Throughput                     : 0 KB/s
        Atomic Caps Inbound               : N/A
        Atomic Caps Outbound              : N/A
    Fan Speed                             : N/A
    Performance State                     : P8
    Clocks Throttle Reasons
        Idle                              : Active
        Applications Clocks Setting       : Not Active
        SW Power Cap                      : Not Active
        HW Slowdown                       : Not Active
        HW Thermal Slowdown               : Not Active
        HW Power Brake Slowdown           : Not Active
        Sync Boost                        : Not Active
        SW Thermal Slowdown               : Not Active
        Display Clock Setting             : Not Active
    FB Memory Usage
        Total                             : 8192 MiB
        Reserved                          : 159 MiB
        Used                              : 12 MiB
        Free                              : 8020 MiB
    BAR1 Memory Usage
        Total                             : 8192 MiB
        Used                              : 1 MiB
        Free                              : 8191 MiB
    Compute Mode                          : Default
    Utilization
        Gpu                               : 0 %
        Memory                            : 0 %
        Encoder                           : 0 %
        Decoder                           : 0 %
    Encoder Stats
        Active Sessions                   : 0
        Average FPS                       : 0
        Average Latency                   : 0
    FBC Stats
        Active Sessions                   : 0
        Average FPS                       : 0
        Average Latency                   : 0
    Ecc Mode
        Current                           : N/A
        Pending                           : N/A
    ECC Errors
        Volatile
            SRAM Correctable              : N/A
            SRAM Uncorrectable            : N/A
            DRAM Correctable              : N/A
            DRAM Uncorrectable            : N/A
        Aggregate
            SRAM Correctable              : N/A
            SRAM Uncorrectable            : N/A
            DRAM Correctable              : N/A
            DRAM Uncorrectable            : N/A
    Retired Pages
        Single Bit ECC                    : N/A
        Double Bit ECC                    : N/A
        Pending Page Blacklist            : N/A
    Remapped Rows                         : N/A
    Temperature
        GPU Current Temp                  : 46 C
        GPU Shutdown Temp                 : 101 C
        GPU Slowdown Temp                 : 98 C
        GPU Max Operating Temp            : 87 C
        GPU Target Temperature            : N/A
        Memory Current Temp               : N/A
        Memory Max Operating Temp         : N/A
    Power Readings
        Power Management                  : Supported
        Power Draw                        : 12.08 W
        Power Limit                       : 4294967.50 W
        Default Power Limit               : 80.00 W
        Enforced Power Limit              : 100.00 W
        Min Power Limit                   : 1.00 W
        Max Power Limit                   : 100.00 W
    Clocks
        Graphics                          : 210 MHz
        SM                                : 210 MHz
        Memory                            : 405 MHz
        Video                             : 555 MHz
    Applications Clocks
        Graphics                          : N/A
        Memory                            : N/A
    Default Applications Clocks
        Graphics                          : N/A
        Memory                            : N/A
    Deferred Clocks
        Memory                            : N/A
    Max Clocks
        Graphics                          : 2100 MHz
        SM                                : 2100 MHz
        Memory                            : 6001 MHz
        Video                             : 1950 MHz
    Max Customer Boost Clocks
        Graphics                          : N/A
    Clock Policy
        Auto Boost                        : N/A
        Auto Boost Default                : N/A
    Voltage
        Graphics                          : 637.500 mV
    Fabric
        State                             : N/A
        Status                            : N/A
    Processes
        GPU instance ID                   : N/A
        Compute instance ID               : N/A
        Process ID                        : 24
            Type                          : G
            Name                          : /Xwayland
            Used GPU Memory               : Not available in WDDM driver model
And my other environment works as expected:
⬢ [Systemd] ❯ mamba activate tf
~ via 🅒 tf via 🐏 774MiB/19GiB | 0B/5GiB
⬢ [Systemd] ❯ python
Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:45:29) [GCC 10.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2022-12-17 23:25:13.867166: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Then it tries to install package version cuda112py39h9333c2f_1
, which uses Python 3.9, but I want Python 3.10. Whenever I try to install the version for 3.10, it shows the error:
Could not solve for environment specs
Encountered problems while solving:
  - nothing provides __cuda needed by tensorflow-2.10.0-cuda112py310he87a039_0
The environment can't be solved, aborting the operation
Why is this error occurring and how can I solve it?
Answer
I ran into this today and found a solution that works (after also seeing your GitHub post). Long story short, you need to set the CONDA_CUDA_OVERRIDE
environment variable to make this work, as described in this conda-forge blog post.
For example, with CUDA 11.8 and mamba, use:
CONDA_CUDA_OVERRIDE="11.8" mamba install tensorflow -c conda-forge
For CUDA 11.8 and conda, it would be:
CONDA_CUDA_OVERRIDE="11.8" conda install tensorflow -c conda-forge
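The prefix form above sets the variable only for that single command. If you plan to run several installs, you can export it once for the whole shell session instead (a minimal sketch, assuming a POSIX shell; any later `mamba install ... -c conda-forge` in the same session will then see the override):

```shell
# Export once so every subsequent conda/mamba invocation in this
# shell sees the override, equivalent to prefixing each command.
export CONDA_CUDA_OVERRIDE="11.8"
echo "CONDA_CUDA_OVERRIDE is now: $CONDA_CUDA_OVERRIDE"
```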
Depending on your setup, you may also want to install cudatoolkit as well, e.g.,
CONDA_CUDA_OVERRIDE="11.8" mamba install tensorflow cudatoolkit -c conda-forge
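After the install finishes, a quick sanity check that the CUDA-enabled build was actually selected (a sketch, assuming the target environment is active and the install succeeded):

```shell
# is_built_with_cuda() is True only for the CUDA build of TensorFlow;
# list_physical_devices('GPU') is empty if the driver isn't reachable.
python -c "import tensorflow as tf; print(tf.test.is_built_with_cuda()); print(tf.config.list_physical_devices('GPU'))"
```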