Code:
ollama run llama3.2
>>> ça va?
Error: POST predict: Post "http://127.0.0.1:43593/completion": EOF

Code:
[sebastien@passionlinuxgentoo ~]$ ollama run llama3.2
>>> racontes moi l'histoire de France
Error: POST predict: Post "http://127.0.0.1:36375/completion": EOF
[sebastien@passionlinuxgentoo ~]$

Code:
[I] dev-util/nvidia-cuda-toolkit
Available versions: ~11.8.0-r4(0/11.8.0)^md ~12.3.2(0/12.3.2)^md ~12.4.1(0/12.4.1)^md ~12.5.1(0/12.5.1)^md ~12.6.3-r1(0/12.6.3)^mstd ~12.8.1-r1(0/12.8.1)^mstd (~)12.9.0(0/12.9.0)^mstd {clang debugger examples nsight profiler rdma sanitizer vis-profiler PYTHON_TARGETS="python3_11 python3_12 python3_13"}
Installed versions: 12.9.0(0/12.9.0)^mstd(01:48:51 09/06/2025)(-clang -debugger -examples -nsight -profiler -rdma -sanitizer PYTHON_TARGETS="python3_13 -python3_11 -python3_12")
Homepage: https://developer.nvidia.com/cuda-zone
Description: NVIDIA CUDA Toolkit (compiler and friends)
[I] sci-ml/ollama [1]
Available versions: (~)0.6.5-r1^s (~)0.6.6^s (~)0.6.8^st (~)0.7.0^st (~)0.7.1^st **9999*l^st {blas cuda mkl rocm AMDGPU="+targets_gfx90a targets_gfx803 targets_gfx900 +targets_gfx906 +targets_gfx908 targets_gfx940 targets_gfx941 +targets_gfx942 targets_gfx1010 targets_gfx1011 targets_gfx1012 +targets_gfx1030 targets_gfx1031 +targets_gfx1100 targets_gfx1101 targets_gfx1102" CPU_FLAGS_X86="amx_int8 amx_tile avx avx2 avx512_bf16 avx512_vnni avx512f avx512vbmi avx_vnni bmi2 f16c fma3 sse4_2"}
Installed versions: 0.7.1^st(02:46:03 09/06/2025)(cuda -blas -mkl -rocm AMDGPU="targets_gfx90a targets_gfx906 targets_gfx908 targets_gfx942 targets_gfx1030 targets_gfx1100 -targets_gfx803 -targets_gfx900 -targets_gfx940 -targets_gfx941 -targets_gfx1010 -targets_gfx1011 -targets_gfx1012 -targets_gfx1031 -targets_gfx1101 -targets_gfx1102" CPU_FLAGS_X86="avx avx2 f16c fma3 sse4_2 -avx512_vnni -avx512f -avx512vbmi -avx_vnni -bmi2")
Homepage: https://ollama.com
Description: Get up and running with Llama 3, Mistral, Gemma, and other language models.