
saturnalia0 wrote:I have a GTX 1060, I only have nvidia-drivers which is needed regardless. Auto1111 works for me just by running the default webui.sh script. What error are you actually getting? How did you install Auto1111? Have you searched the issues on GitHub? PS: I'm using an old release, v1.5.1.

A1111 is installed the usual way, though I had to skirt around venv because I was having issues with it, so my pip and environment are not limited to the venv. More specifically, I am using A1111-Forge, a fork of A1111; however, I have tried both and neither works. The error when I run webui.sh is:
Code: Select all
Traceback (most recent call last):
  File "/home/lkvass/RAID/stable-diffusion-webui/launch.py", line 48, in <module>
    main()
  File "/home/lkvass/RAID/stable-diffusion-webui/launch.py", line 39, in main
    prepare_environment()
  File "/home/lkvass/RAID/stable-diffusion-webui/modules/launch_utils.py", line 386, in prepare_environment
    raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Code: Select all
Traceback (most recent call last):
  File "/home/lkvass/RAID/stable-diffusion-webui-forge/launch.py", line 51, in <module>
    main()
  File "/home/lkvass/RAID/stable-diffusion-webui-forge/launch.py", line 47, in main
    start()
  File "/home/lkvass/RAID/stable-diffusion-webui-forge/modules/launch_utils.py", line 541, in start
    import webui
  File "/home/lkvass/RAID/stable-diffusion-webui-forge/webui.py", line 17, in <module>
    initialize_forge()
  File "/home/lkvass/RAID/stable-diffusion-webui-forge/modules_forge/initialization.py", line 50, in initialize_forge
    import ldm_patched.modules.model_management as model_management
  File "/home/lkvass/RAID/stable-diffusion-webui-forge/ldm_patched/modules/model_management.py", line 122, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
                                  ^^^^^^^^^^^^^^^^^^
  File "/home/lkvass/RAID/stable-diffusion-webui-forge/ldm_patched/modules/model_management.py", line 91, in get_torch_device
    return torch.device(torch.cuda.current_device())
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/torch/cuda/__init__.py", line 787, in current_device
    _lazy_init()
  File "/usr/lib/python3.11/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
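Note that the traceback imports torch from /usr/lib/python3.11/site-packages, i.e. the system-wide install rather than the venv one. A quick way to confirm which torch a given interpreter would actually import (the `locate` helper below is just an illustrative sketch, not part of the webui):

```python
import importlib.util
import sys

def locate(pkg: str) -> str:
    """Return the file a package would be imported from, for this interpreter."""
    spec = importlib.util.find_spec(pkg)
    return spec.origin if spec else f"{pkg}: not installed for {sys.executable}"

# Run this once with the system python3 and once inside the venv;
# if the paths differ, webui.sh and the venv are using different torch builds.
print("interpreter:", sys.executable)
print("torch:", locate("torch"))
```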

Code: Select all
stable-diffusion-webui $ source ./venv/bin/activate
(venv) stable-diffusion-webui $ python3
Python 3.11.8 (main, Mar 28 2024, 10:57:06) [GCC 13.2.1 20240210] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
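So the venv's torch does see the GPU, while the tracebacks above load torch from /usr/lib/python3.11/site-packages. My suspicion is that, since I skipped the venv, webui.sh is falling back to the system interpreter and its CPU-only torch build. If that's the cause, something like this in webui-user.sh (untested on my end; python_cmd is the launcher variable for choosing the interpreter) might force the CUDA build:

```shell
# webui-user.sh -- point the launcher at the venv's interpreter
# (path taken from my install; adjust as needed)
python_cmd="/home/lkvass/RAID/stable-diffusion-webui/venv/bin/python3"
```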