LeTesla (n00b)
Joined: 03 Aug 2020    Posts: 3
Posted: Sat Apr 20, 2024 5:32 am    Post subject: Help with Stable Diffusion on Gentoo
I'm currently moving from Artix to Gentoo, and one thing that was much easier to set up there was the CUDA-related packages. Gentoo doesn't seem to package the actual CUDA SDK any more; I can see archives for a dev-util/nvidia-cuda-sdk package, but it stopped after version 10, although dev-util/nvidia-cuda-toolkit still exists. Installing that package doesn't fix the error where Automatic1111 can't use my RTX 3050 because it can't find CUDA. Is there any way around this? I've heard about downloading the CUDA SDK directly, but the latest on Nvidia's website is version 12 while the toolkit package is at 12.4.1, and I'm also unsure how to go about installing the SDK manually. If there's another method, I'd be willing to try that as well.
saturnalia0 (Tux's lil' helper)
Joined: 13 Oct 2016    Posts: 136
Posted: Mon Apr 22, 2024 4:33 pm    Post subject:
I have a GTX 1060 and only have nvidia-drivers installed, which you need regardless. Auto1111 works for me just by running the default webui.sh script. What error are you actually getting? How did you install Auto1111? Have you searched the issues on GitHub? PS: I'm on an old release, v1.5.1.
LeTesla (n00b)
Joined: 03 Aug 2020    Posts: 3
Posted: Mon Apr 22, 2024 7:39 pm    Post subject:
saturnalia0 wrote: | I have a GTX 1060 and only have nvidia-drivers installed, which you need regardless. Auto1111 works for me just by running the default webui.sh script. What error are you actually getting? How did you install Auto1111? Have you searched the issues on GitHub? PS: I'm on an old release, v1.5.1. |
A1111 was installed the usual way, except that I had to skip venv because I kept running into issues with it, so pip and my environment are not confined to a venv. More specifically, I'm using A1111-Forge, a fork of A1111, but I have tried both and neither works. The error when I run webui.sh is:
Code: | Traceback (most recent call last):
File "/home/lkvass/RAID/stable-diffusion-webui/launch.py", line 48, in <module>
main()
File "/home/lkvass/RAID/stable-diffusion-webui/launch.py", line 39, in main
prepare_environment()
File "/home/lkvass/RAID/stable-diffusion-webui/modules/launch_utils.py", line 386, in prepare_environment
raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
|
When I run it as ./webui.sh --skip-torch-cuda-test:
Code: | Traceback (most recent call last):
File "/home/lkvass/RAID/stable-diffusion-webui-forge/launch.py", line 51, in <module>
main()
File "/home/lkvass/RAID/stable-diffusion-webui-forge/launch.py", line 47, in main
start()
File "/home/lkvass/RAID/stable-diffusion-webui-forge/modules/launch_utils.py", line 541, in start
import webui
File "/home/lkvass/RAID/stable-diffusion-webui-forge/webui.py", line 17, in <module>
initialize_forge()
File "/home/lkvass/RAID/stable-diffusion-webui-forge/modules_forge/initialization.py", line 50, in initialize_forge
import ldm_patched.modules.model_management as model_management
File "/home/lkvass/RAID/stable-diffusion-webui-forge/ldm_patched/modules/model_management.py", line 122, in <module>
total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
^^^^^^^^^^^^^^^^^^
File "/home/lkvass/RAID/stable-diffusion-webui-forge/ldm_patched/modules/model_management.py", line 91, in get_torch_device
return torch.device(torch.cuda.current_device())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/torch/cuda/__init__.py", line 787, in current_device
_lazy_init()
File "/usr/lib/python3.11/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
|
saturnalia0 (Tux's lil' helper)
Joined: 13 Oct 2016    Posts: 136
Posted: Mon Apr 22, 2024 11:10 pm    Post subject:
Well, has torch been installed with CUDA support? How did you install it, if not through the vanilla script in a venv? My environment looks like this:
Code: |
stable-diffusion-webui $ source ./venv/bin/activate
(venv) stable-diffusion-webui $ python3
Python 3.11.8 (main, Mar 28 2024, 10:57:06) [GCC 13.2.1 20240210] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
|
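Even without a venv you can check which torch wheel pip actually gave you; the version suffix tells you the build (this is just a generic diagnostic, not specific to A1111):
Code: |
```python
import torch

# A CUDA wheel reports a version like "2.1.2+cu121";
# a CPU-only wheel reports something like "2.1.2+cpu".
print(torch.__version__)

# None on a CPU-only build, otherwise the CUDA version
# the wheel was compiled against.
print(torch.version.cuda)

print(torch.cuda.is_available())
```
|
If torch.version.cuda is None, pip pulled a CPU-only wheel, which would explain the "Torch not compiled with CUDA enabled" assertion.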
Honestly, I'd just wipe everything and try again with venv, which is the default, and resolve whatever issues arise from that. Note that webui-user.sh has a TORCH_COMMAND variable which you can use to customize how torch is installed.
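For example, something like this in webui-user.sh should force a CUDA wheel; the exact versions and the cu121 index URL here are just an untested sketch based on what PyTorch publishes for CUDA 12.1, so pick ones matching your driver:
Code: |
```shell
# webui-user.sh -- sketch only; adjust versions and CUDA index URL to your setup
export TORCH_COMMAND="pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu121"
```
|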
See also: https://github.com/pytorch/pytorch/issues/30664
Sorry if this doesn't help...