Gentoo Forums
Blender: minimum requirement for hardware acceleration?
Gentoo Forums Forum Index » Multimedia
VinzC
Watchman
Joined: 17 Apr 2004
Posts: 5048
Location: Dark side of the mood

PostPosted: Fri Jan 03, 2020 1:12 pm    Post subject: Blender: minimum requirement for hardware acceleration?

Hi all and a Happy New Year.

Disclaimer: I know little more than nothing about programmatically using GPU acceleration as of yet, so bear with me. I've just started 3D modelling with Blender and I'd like to accelerate renders using the 192 CUDA cores of my GPU:
nvidia-smi:
Fri Jan  3 13:01:37 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTS 450     Off  | 00000000:01:00.0 N/A |                  N/A |
| 30%   45C    P8    N/A /  N/A |    319MiB /  1985MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+

I'm not quite there yet, though, and I need some information first. So far Blender is installed with these USE flags: bullet cuda cycles dds elbeem ffmpeg game-engine nls openexr openimageio openmp sdl tiff -collada -color-management -debug -doc -fftw -headless -jack -jemalloc -jpeg2k -libav -llvm -man -ndof -openal -opencl -opensubdiv -openvdb -osl -player -sndfile -test -valgrind. When I tried enabling CUDA in Blender, I got an error message on the console:
Code:
"nvcc" -arch=sm_21 --cubin "/usr/share/blender/2.79/scripts/addons/cycles/source/kernel/kernels/cuda/kernel.cu" -o "/home/vinz/.cache/cycles/kernels/cycles_kernel_sm21_E6A2F909D7D8605BCB0051AFD4889651.cubin" -m64 --ptxas-options="-v" --use_fast_math -DNVCC -D__KERNEL_CUDA_VERSION__=91 -I"/usr/share/blender/2.79/scripts/addons/cycles/source"
nvcc fatal   : Value 'sm_21' is not defined for option 'gpu-architecture'

Erm... well, I wonder what that sm_21 is, by the way, where it comes from and what I'm supposed to make of it. Anyway, after digging a little I found this:

https://devtalk.nvidia.com/default/topic/1063538/nvcc-warning-the-compute_20-sm_20-and-sm_21-architectures-are-deprecated/ wrote:
compute_20 architecture is deprecated as of CUDA-8 (and support for it was dropped in CUDA-9, so the default architecture for CUDA-9 is compute_30/sm_30).

So I need a CUDA "stack" no newer than 8.0... but CUDA toolkit 8.0 is masked and I've just installed CUDA toolkit 9.0 and the works. Back to square one I guess :( .

So my questions are:
  • can I safely unmask CUDA 8?
  • are there implications, and if so, what are they?
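For clarity, here's what I think the unmask would look like; the exact atoms are my guess from the package names, so treat this as a sketch, not a recipe:

```
# /etc/portage/package.unmask -- tentative entries; the exact atoms should
# be checked against the mask message in the profiles' package.mask
=dev-util/nvidia-cuda-toolkit-8.0*
=sys-devel/gcc-5.4*
```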
I also saw that the CUDA toolkit requires an old version of GCC. Do I have to switch to that compiler manually whenever I emerge packages that rely on CUDA, or does Portage handle that for me?
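From what I've read (untested, so the file names and paths below are my assumption, not a verified recipe), Portage can at least apply per-package environment overrides via /etc/portage/package.env, which might be a way to point nvcc at the old GCC without touching the system compiler:

```
# /etc/portage/env/cuda-gcc5.conf  (the file name is mine; any name works)
# Point nvcc's host compiler at the slotted GCC 5.4 installation.
NVCCFLAGS="${NVCCFLAGS} --compiler-bindir=/usr/x86_64-pc-linux-gnu/gcc-bin/5.4.0"

# /etc/portage/package.env
media-libs/opensubdiv cuda-gcc5.conf
media-gfx/blender cuda-gcc5.conf
```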

Finally, I also ran into compilation errors with media-libs/opensubdiv-3.3.3 because it doesn't play well with glibc versions > 2.26 (as I gathered from various reports). Those errors are about missing symbols such as __float128 and __mode__:
Code:
/opt/cuda/bin/nvcc /portage.d/tmp/portage/media-libs/opensubdiv-3.3.3/work/OpenSubdiv-3_3_3/opensubdiv/osd/cudaKernel.cu -c -o /portage.d/tmp/port
age/media-libs/opensubdiv-3.3.3/work/opensubdiv-3.3.3_build/opensubdiv/CMakeFiles/osd_dynamic_gpu.dir/osd/./osd_dynamic_gpu_generated_cudaKernel.c
u.o -ccbin /usr/lib64/ccache/bin/x86_64-pc-linux-gnu-gcc -m64 -Dosd_dynamic_gpu_EXPORTS -DOPENSUBDIV_VERSION_STRING=\"3.3.3\" -DOPENSUBDIV_HAS_OPE
NGL -DOPENSUBDIV_HAS_OPENMP -DOPENSUBDIV_HAS_TBB -DGLFW_VERSION_3 -DOSD_USES_GLEW -DOPENSUBDIV_HAS_GLSL_TRANSFORM_FEEDBACK -DOPENSUBDIV_HAS_GLSL_C
OMPUTE -DOPENSUBDIV_HAS_CUDA -DOPENSUBDIV_HAS_PTEX -DPTEX_STATIC -Xcompiler ,\"-march=native\",\"-O2\",\"-pipe\",\"-fno-stack-protector\",\"-fPIC\
" -Xcompiler -fPIC --gpu-architecture compute_30 -DNVCC -I/opt/cuda/include -I/portage.d/tmp/portage/media-libs/opensubdiv-3.3.3/work/OpenSubdiv-3
_3_3/opensubdiv -I/usr/include
/usr/include/bits/floatn.h(74): error: invalid argument to attribute "__mode__"

/usr/include/bits/floatn.h(86): error: identifier "__float128" is undefined

/usr/include/bits/floatn.h(74): error: invalid argument to attribute "__mode__"

/usr/include/bits/floatn.h(86): error: identifier "__float128" is undefined

/usr/include/bits/floatn.h(74): error: invalid argument to attribute "__mode__"

/usr/include/bits/floatn.h(86): error: identifier "__float128" is undefined

So my last questions are:
  • what is the bare minimum needed for Blender to render models using GPU acceleration?
  • do I need opencv and opensubdiv?
  • what are the implications of not using those packages?
I know that's a great many questions, so thanks a whole lot in advance for any help.

P.S.: I'm not considering buying a new video card, as my motherboard is an old ASUS P8H67-M, i.e. too old to take anything recent, which means I'd have to buy myself a new computer, and I'm frankly not inclined to do that for just one piece of software. Sweet obsolescence...
_________________
Gentoo addict: tomorrow I quit, I promise!... Just one more emerge...
1739!


Last edited by VinzC on Sat Jan 04, 2020 2:08 am; edited 1 time in total
mir3x
Guru
Joined: 02 Jun 2012
Posts: 445

PostPosted: Fri Jan 03, 2020 7:56 pm

I know nothing about this, but you could try Blender in Steam and see if it works better there.

EDIT: under CUDA in the settings I see my GeForce and also my CPU, so there are probably no special requirements for it.
_________________
Sent from Windows
VinzC
Watchman
Joined: 17 Apr 2004
Posts: 5048
Location: Dark side of the mood

PostPosted: Fri Jan 03, 2020 10:27 pm

Thank you mir3x for being willing to help, but you haven't answered my questions. I don't have Steam and I don't intend to install it. I asked all those questions because I first want to understand and be able to sort this out somehow, but I'm really confused at the moment and need some guidance. My main concern is the potential consequences of installing masked packages on my system, which I have never done in all my Gentoo years. It may sound exaggerated, but I wouldn't like to screw up my machine and have to reinstall it from scratch if anything goes wrong; I know my limits.
_________________
Gentoo addict: tomorrow I quit, I promise!... Just one more emerge...
1739!
etnull
Apprentice
Joined: 26 Mar 2019
Posts: 263

PostPosted: Fri Jan 03, 2020 10:54 pm

I have a GTX 770; the last time I tried, it worked, which may narrow down your search for an answer. Also, it isn't fast even with 1536 CUDA cores (I think those are the old-style cores; they changed the way they count them at some point). This GTX 770 is nowhere close to a real-time viewport and barely usable for rendering, so it might not be worth it for you in the end.
VinzC
Watchman
Joined: 17 Apr 2004
Posts: 5048
Location: Dark side of the mood

PostPosted: Sat Jan 04, 2020 2:05 am

etnull wrote:
This GTX 770 is nowhere close to a real-time viewport and barely usable for rendering, so it might not be worth it for you in the end.

Thanks, that's a bit enlightening. I have unmasked GCC 5.4 and CUDA toolkit 8.0 so far and my system is still stable and usable, but I'll need to monitor it in the long run.

The only trouble I'm facing now is the compile errors about missing symbols in "floatn.h" like __float128 and __mode__. I still have no clue how to fix those.

At some point I'll check for myself whether the GPU actually brings any acceleration. That said, the only (simple) render I've made so far took 6 hours to compute on a Core i5 for just 150 frames and some fire simulation. I hope there's room for improvement...
_________________
Gentoo addict: tomorrow I quit, I promise!... Just one more emerge...
1739!
VinzC
Watchman
Joined: 17 Apr 2004
Posts: 5048
Location: Dark side of the mood

PostPosted: Sat Jan 04, 2020 10:50 am

Ok, I temporarily bypassed the error message about __float128 types and now I can select CUDA and render something without any error. What I don't understand is that I see barely any difference in render time between "GPU Compute" and CPU. With CPU rendering Blender uses 100% of all cores, while with "GPU Compute" it uses 100% of just one core (the model is the initial cube). It takes 8 seconds with CPU rendering and 10 seconds with "GPU Compute" 8O . I might be doing something wrong, but I just don't know what, hence the questions in my initial post.

EDIT: To have nvcc use GCC 5.4.0 without triggering the error messages about 128-bit floating point, here's the dirty hack I had to apply (though it's still cleaner than other workarounds I saw):
diff -Nau old/usr/include/bits/floatn.h new/usr/include/bits/floatn.h:
--- old/usr/include/bits/floatn.h 2019-01-04 10:45:45.000000000 +0200
+++ new/usr/include/bits/floatn.h 2020-01-04 12:38:16.000000000 +0200
@@ -34,6 +34,11 @@
 # define __HAVE_FLOAT128 0
 #endif
 
+/* Shortcut: disable float128 with CUDA 8.0 (i.e. GCC 5.4.0) */
+#if defined(__CUDACC__) && __GNUC__ == 5
+#define __HAVE_FLOAT128 0
+#endif
+
 /* Defined to 1 if __HAVE_FLOAT128 is 1 and the type is ABI-distinct
    from the default float, double and long double types in this glibc.  */
 #if __HAVE_FLOAT128

I also had to run Blender with the following command to have it use GCC 5.4.0:
Code:
CYCLES_CUDA_EXTRA_CFLAGS="-ccbin gcc-5.4.0" blender

Note that it is also possible to edit /opt/cuda/bin/nvcc.profile and add the following line:
Code:
compiler-bindir  = /usr/x86_64-pc-linux-gnu/gcc-bin/5.4.0/

A third option is to set CYCLES_CUDA_EXTRA_CFLAGS in an environment file under /etc/env.d, e.g. :
/etc/env.d/99blender:
CYCLES_CUDA_EXTRA_CFLAGS="-ccbin gcc-5.4.0"
_________

Now how do I check whether/how the GPU is used when rendering?

Still pending...
_________________
Gentoo addict: tomorrow I quit, I promise!... Just one more emerge...
1739!


Last edited by VinzC on Sat Jan 04, 2020 12:22 pm; edited 1 time in total
etnull
Apprentice
Joined: 26 Mar 2019
Posts: 263

PostPosted: Sat Jan 04, 2020 12:18 pm

I think it should say something about CUDA in the settings and you should be able to select your GPU there.
I don't remember using this checkbox, and it still rendered with the GPU. If you get a constantly refined viewport preview in Cycles that goes from grainy to clean, then it's working.
VinzC
Watchman
Joined: 17 Apr 2004
Posts: 5048
Location: Dark side of the mood

PostPosted: Sat Jan 04, 2020 12:41 pm

etnull wrote:
I think it should say something about CUDA in the settings and you should be able to select your GPU there.

It does say that in Blender. I'd just like to know how to *verify* whether it's really being used at all.

etnull wrote:
If you get a constantly refined viewport preview in Cycles that goes from grainy to clean, then it's working.

I see no difference between CPU and GPU rendering (i.e. little squares progressively filling the render viewport), which is why I'd like some other way to verify that the GPU is *actually* being used.
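In case someone lands here later: the only crude check I can think of is to poll nvidia-smi while a render runs. Note that on this 390.x driver the GTS 450 reports N/A for GPU utilisation (see the output in my first post), so memory usage and temperature may be the only usable indicators:

```shell
# Refresh nvidia-smi every second during a render; a jump in the
# "Memory-Usage" column (and in temperature) suggests the GPU is working.
watch -n 1 nvidia-smi

# Newer drivers also accept a query form that's easier to eyeball:
# nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
```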
_________________
Gentoo addict: tomorrow I quit, I promise!... Just one more emerge...
1739!