Some time ago, I wrote a post about using the proprietary nvidia graphics stack as much as possible,
and stripping out the open-source graphics stack, which is mostly unused on X11 when the proprietary DDX driver is used.
Here's that post, if anyone's interested: https://forums.gentoo.org/viewtopic-t-1167405.html
Now, I'm writing a post about doing the opposite: using as much of the open-source graphics stack as possible on nvidia on X11, and as little of the proprietary stack as possible.
This post assumes that anyone following it is using the maintained branch of the nvidia kernel drivers, either the proprietary kernel modules or the open kernel modules.
It also assumes that EGL is working on the GBM platform on the nvidia card.
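A quick way to sanity-check both assumptions (hedged: eglinfo comes from mesa-demos, and its -B and -p flags only exist in recent versions):
Code: Select all
# The modesetting DDX needs KMS; nvidia-drm must be loaded with modeset=1.
cat /sys/module/nvidia_drm/parameters/modeset
# Expect "Y"; if not, add nvidia-drm.modeset=1 to the kernel command line.

# Check EGL on the GBM platform:
eglinfo -B -p gbm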
Part 1: Using the modesetting DDX driver instead of the nvidia DDX driver, without PRIME render offload.
When using Xorg, the modesetting DDX driver works with nvidia hw, but only unaccelerated.
This means that the gpu is more or less used only as a dumb framebuffer, which is very slow.
For hw acceleration to work with the modesetting driver, glamor support is needed.
Sadly, with the current Xorg git master, glamor does not work on nvidia hw when using the modesetting driver.
However, there is a patch that enables glamor to work on nvidia hw: https://gitlab.freedesktop.org/xorg/xse ... uests/2057
With this patch applied, glamor should work on nvidia hw, which brings full hw acceleration.
There are some issues with the hw cursor, but those can be fixed with further patches.
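If the hw cursor misbehaves and further patching is not an option, the modesetting driver can also be told to fall back to a software cursor in the Device section (slightly slower, but reliable; this is a standard modesetting option, not something specific to nvidia):
Code: Select all
Section "Device"
    Identifier "Device0"
    Driver "modesetting"
    Option "SWcursor" "on"
EndSection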
However, there is another X server implementation, Xlibre: https://github.com/X11Libre/xserver
With Xlibre git master, no patching should be required: everything, including hw cursors, should just work with a proper xorg.conf.
Here's an example xorg.conf for one screen and one monitor with an nvidia card:
Code: Select all
Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "modesetting"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
EndSection
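To confirm glamor actually kicked in, the Xorg log can be checked after starting the server (log path assumed to be the usual one; it differs when X runs rootless):
Code: Select all
grep -i glamor /var/log/Xorg.0.log
# A "glamor initialized" line means acceleration is active;
# "Failed to initialize glamor" means it fell back to unaccelerated.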
Part 2: Using the modesetting DDX driver with PRIME render offload.
With the Xorg patch linked above, PRIME render offload might load, but I wouldn't recommend it in the state it is in right now.
With Xlibre git master, everything should just work in this case too, with no patches applied to the X server, with a proper xorg.conf.
Sadly, this patch for mesa is needed: https://gitlab.freedesktop.org/mesa/mes ... ests/37342
This one may or may not be needed too: https://gitlab.freedesktop.org/mesa/mes ... ests/38201
Both of these patches are in mesa git master, but not in any release.
Users either have to build mesa git master, or apply these patches as local patches on top of a mesa release.
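On Gentoo, applying them as local patches is straightforward with /etc/portage/patches (the filename below is just an example; save the .patch files from the merge requests yourself, since I'm not guessing at the raw URLs):
Code: Select all
mkdir -p /etc/portage/patches/media-libs/mesa
# Save the .patch files from the merge requests into that directory, e.g.:
#   /etc/portage/patches/media-libs/mesa/mr-37342.patch
emerge -1 media-libs/mesa
# Check the build log for "Applying ..." lines to confirm they were picked up.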
Here's an xorg.conf that works on my laptop with an intel iGPU and an nvidia dGPU.
It's not the exact one I use, but it should serve as a base for anyone who wants to use it:
Code: Select all
Section "ServerLayout"
    Identifier "Layout0"
    # intel gpu
    Screen 0 "Screen0"
    # nvidia gpu
    Screen 1 "Screen1"
EndSection

Section "Device"
    Identifier "Device0"
    Driver "modesetting"
    # Option "GlxVendorLibrary" "nvidia"
    # Assuming card0 is intel, use drm_info to check
    Option "kmsdev" "/dev/dri/card0"
    # VendorName "Intel"
EndSection

Section "Device"
    Identifier "Device1"
    Driver "modesetting"
    Option "GlxVendorLibrary" "nvidia"
    # Assuming card1 is nvidia, use drm_info to check
    Option "kmsdev" "/dev/dri/card1"
    # VendorName "NVIDIA Corporation"
    # BusID from lspci
    BusID "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Device0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device "Device1"
EndSection
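Once the server is up with that config, the provider setup can be verified with xrandr; both GPUs should show up (exact provider names vary by driver and hardware):
Code: Select all
xrandr --listproviders
# Two providers should be listed, one per card.
# If only one shows up, recheck the kmsdev paths and the BusID above.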

