Gentoo Forums
[SOLVED] Optimus and Nvidia

Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Mon Aug 04, 2014 10:09 pm    Post subject: [SOLVED] Optimus and Nvidia

I found this post (https://forums.gentoo.org/viewtopic-t-959568.html) and posted my question there, but the thread seems dead, so forgive me if this is a duplicate post:

Well, I just bought a laptop with Optimus technology, and I tried bumblebee and didn't like it. I read around and I understand that it is now possible to use the nvidia GPU without bumblebee, even though my laptop lacks a hardware mux to disable the Intel GPU.
So I followed all the instructions here but couldn't get it to work. I uninstalled xdm and bumblebee to make things simpler.

These are my adapters:

Code:

lspci -v |grep VGA
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06) (prog-if 00 [VGA controller])

lspci -v |grep 3D
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 860M] (rev a2)



Here are my PCI devices and the kernel modules in use:

Code:

00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller (rev 06)
   Subsystem: CLEVO/KAPOK Computer Device 2300
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
   Kernel driver in use: pcieport
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: i915
   Kernel modules: i915
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: snd_hda_intel
   Kernel modules: snd_hda_intel
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: xhci_hcd
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: mei_me
   Kernel modules: mei_me
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: ehci-pci
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: snd_hda_intel
   Kernel modules: snd_hda_intel
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5)
   Kernel driver in use: pcieport
00:1c.2 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 (rev d5)
   Kernel driver in use: pcieport
00:1c.3 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 (rev d5)
   Kernel driver in use: pcieport
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: ehci-pci
00:1f.0 ISA bridge: Intel Corporation HM87 Express LPC Controller (rev 05)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: lpc_ich
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: ahci
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05)
   Subsystem: CLEVO/KAPOK Computer Device 2300
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 860M] (rev a2)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: nvidia
   Kernel modules: nvidia
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6235 (rev 24)
   Subsystem: Intel Corporation Centrino Advanced-N 6235 AGN
   Kernel driver in use: iwlwifi
   Kernel modules: iwlwifi
04:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 5287 (rev 01)
   Subsystem: CLEVO/KAPOK Computer Device 2300
04:00.1 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 12)
   Subsystem: CLEVO/KAPOK Computer Device 2300
   Kernel driver in use: r8169



Here is my full dmesg on boot: http://pastebin.com/T1STepGj

Nvidia seems to load fine:
Code:


g1310max linux # dmesg |grep -i nvidia
[    2.140090] nvidia: module license 'NVIDIA' taints kernel.
[    2.171864] Modules linked in: nvidia(PO+) iwldvm snd_hda_codec_via i915(+) snd_hda_codec_generic joydev coretemp iwlwifi snd_hda_intel snd_hda_codec mei_me drm_kms_helper mei snd_hwdep
[    2.172645] CPU: 2 PID: 1343 Comm: nvidia-smi Tainted: P          IO 3.14.14-gentoo #26
[    2.175708]  [<ffffffffa07c86e7>] nvidia_open+0x567/0x8f0 [nvidia]
[    2.175868]  [<ffffffffa07d2dcf>] nvidia_frontend_open+0x4f/0xa0 [nvidia]
[    4.888691] [drm] Initialized nvidia-drm 0.0.0 20130102 for 0000:01:00.0 on minor 1
[    4.888746] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  340.24  Wed Jul  2 14:24:20 PDT 2014



Although there are some kernel warnings about intel:

Code:

g1310max linux # dmesg |grep -i vga
[    0.000000] Console: colour VGA+ 80x25
[    0.148899] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
[    0.149048] vgaarb: loaded
[    0.149137] vgaarb: bridge control possible 0000:00:02.0
[    2.070477] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
[    2.205517] [drm] GMBUS [i915 gmbus vga] timed out, falling back to bit banging on pin 2


I don't know what this timeout warning is about.


I have enabled Intel kernel mode setting with intel_agpgart and the intel drivers; here is my complete kernel config:

http://pastebin.com/LvsuqA0T

Relevant kernel config entries:
Code:

CONFIG_DRM=y
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_KMS_FB_HELPER=y
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
# CONFIG_DRM_I2C_CH7006 is not set
# CONFIG_DRM_I2C_SIL164 is not set
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_TDFX is not set
# CONFIG_DRM_R128 is not set
# CONFIG_DRM_RADEON is not set
# CONFIG_DRM_NOUVEAU is not set
CONFIG_DRM_I810=m
CONFIG_DRM_I915=m
CONFIG_DRM_I915_KMS=y
CONFIG_DRM_I915_FBDEV=y
# CONFIG_DRM_I915_PRELIMINARY_HW_SUPPORT is not set
# CONFIG_DRM_I915_UMS is not set
# CONFIG_DRM_MGA is not set
# CONFIG_DRM_SIS is not set
# CONFIG_DRM_VIA is not set
# CONFIG_DRM_SAVAGE is not set
# CONFIG_DRM_VMWGFX is not set
# CONFIG_DRM_GMA500 is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_DRM_QXL is not set
# CONFIG_DRM_BOCHS is not set


I also have x11-drivers/xf86-video-intel, xf86-video-modesetting, nvidia-drivers and x11-base/xorg-drivers (with VIDEO_CARDS="intel modesetting nvidia") at their latest versions:

Code:

eix xorg-drivers
[I] x11-base/xorg-drivers
     Available versions:  1.9 1.10 1.11 1.12 1.13 1.14 1.15 ~1.16 {INPUT_DEVICES="acecad aiptek elographics evdev fpit hyperpen joystick keyboard mouse mutouch penmount synaptics tslib vmmouse void wacom" VIDEO_CARDS="apm ark ast chips cirrus dummy epson fbdev fglrx freedreno geode glint i128 i740 impact intel mach64 mga modesetting neomagic newport nouveau nv nvidia omap omapfb qxl r128 radeon radeonsi rendition s3 s3virge savage siliconmotion sis sisusb sunbw2 suncg14 suncg3 suncg6 sunffb sunleo suntcx tdfx tga trident tseng v4l vesa via virtualbox vmware voodoo"}
     Installed versions:  1.15(12:34:58 08/04/14)(INPUT_DEVICES="evdev keyboard mouse synaptics -acecad -aiptek -elographics -fpit -hyperpen -joystick -mutouch -penmount -tslib -vmmouse -void -wacom" VIDEO_CARDS="intel modesetting nvidia -apm -ast -chips -cirrus -dummy -epson -fbdev -fglrx -freedreno -geode -glint -i128 -i740 -mach64 -mga -neomagic -nouveau -nv -omap -omapfb -qxl -r128 -radeon -radeonsi -rendition -s3virge -savage -siliconmotion -sisusb -sunbw2 -suncg14 -suncg3 -suncg6 -sunffb -sunleo -suntcx -tdfx -tga -trident -tseng -v4l -vesa -via -virtualbox -vmware -voodoo")

eix -I xf86

[I] x11-drivers/xf86-video-intel
     Available versions:  ~*2.9.1 2.19.0 2.20.13 2.21.15 ~2.99.903 ~2.99.905-r1 ~2.99.906 ~2.99.907-r1 ~2.99.909 ~2.99.910 ~2.99.911-r1 ~2.99.912 ~2.99.912-r1 ~2.99.914 {debug dri glamor (+)sna +udev uxa xvmc}
     Installed versions:  2.21.15(11:12:21 08/04/14)(dri sna udev -glamor -uxa -xvmc)
     Description:         X.Org driver for Intel cards

[I] x11-drivers/xf86-video-modesetting
     Available versions:  0.8.1 ~0.9.0
     Installed versions:  0.8.1(10:43:55 08/04/14)
     Description:         Unaccelerated generic driver for kernel modesetting

[I] x11-drivers/nvidia-drivers
     Available versions:  96.43.23^msd 173.14.39^msd 304.123^msd 331.89^msd 334.21-r3^msd 337.25^msd 340.24^msd {+X acpi custom-cflags gtk multilib pax_kernel (+)tools uvm KERNEL="FreeBSD linux"}
     Installed versions:  340.24^msd(12:02:08 08/04/14)(X acpi tools -multilib -pax_kernel -uvm KERNEL="linux -FreeBSD")
     Description:         NVIDIA Accelerated Graphics Driver


The respective modules are also loaded:

Code:

lsmod
Module Size Used by
cpufreq_ondemand 8149 4
nvidia 10474972 2
iwldvm 124399 0
snd_hda_codec_via 19239 1
i915 685228 4
snd_hda_codec_generic 49601 2 snd_hda_codec_via
joydev 9324 0
coretemp 6198 0
iwlwifi 75304 1 iwldvm
snd_hda_intel 28463 1
snd_hda_codec 76005 3 snd_hda_codec_via,snd_hda_codec_generic,snd_hda_intel
mei_me 7774 0
drm_kms_helper 28073 1 i915
mei 44535 1 mei_me
snd_hwdep 5916 1 snd_hda_codec


Also, I tried recompiling mesa with i915 in the VIDEO_CARDS variable, to no avail:

Code:

[I] media-libs/mesa
     Available versions:  [M]7.10.3 [M]7.11.2 [M]8.0.4-r1 [M]~9.0.3 9.1.6 ~9.2.5-r1 10.0.4 ~10.1.0 ~10.1.1 ~10.1.3 ~10.1.4 ~10.1.6 ~10.2.1 ~10.2.2 ~10.2.4 {bindist +classic debug +dri3 +egl g3dvl +gallium gbm gles gles1 gles2 hardened (+)llvm (+)llvm-shared-libs motif +nptl opencl openmax openvg osmesa pax_kernel pic r600-llvm-compiler selinux shared-dricore +shared-glapi vdpau wayland xa xorg xvmc ABI_MIPS="n32 n64 o32" ABI_PPC="32 64" ABI_S390="32 64" ABI_X86="32 64 x32" KERNEL="FreeBSD" PYTHON_SINGLE_TARGET="python2_7" PYTHON_TARGETS="python2_7" VIDEO_CARDS="freedreno i915 i965 ilo intel mach64 mga nouveau r100 r128 r200 r300 r600 radeon radeonsi savage sis tdfx via vmware"}
     Installed versions:  10.0.4(12:38:13 08/04/14)(classic egl gallium gbm llvm nptl xa -bindist -debug -gles1 -gles2 -llvm-shared-libs -opencl -openvg -osmesa -pax_kernel -pic -r600-llvm-compiler -selinux -vdpau -wayland -xvmc ABI_MIPS="-n32 -n64 -o32" ABI_PPC="-32 -64" ABI_S390="-32 -64" ABI_X86="64 -32 -x32" KERNEL="-FreeBSD" VIDEO_CARDS="i965 intel -freedreno -i915 -ilo -nouveau -r100 -r200 -r300 -r600 -radeon -radeonsi -vmware")


I think it's also worth mentioning that I had to set a kernel command line in grub to get modesetting working:

Code:

cat /etc/default/grub |grep 915
GRUB_CMDLINE_LINUX_DEFAULT="drm_kms_helper.edid_firmware=edid/g1310max.bin i915.modeset=1"

cat /sys/module/i915/parameters/modeset
1


The only way I can get Xorg to work is with an /etc/X11/xorg.conf, which gives me the following /var/log/Xorg.0.log:

http://pastebin.com/rC10K0wf

This seems to load the intel driver with KMS and the i965 DRI driver correctly:

Code:

cat /var/log/Xorg.0.log |grep -i intel
[   602.720] (==) Matched intel as autoconfigured driver 0
[   602.720] (==) Matched intel as autoconfigured driver 3
[   602.720] (II) LoadModule: "intel"
[   602.720] (II) Loading /usr/lib64/xorg/modules/drivers/intel_drv.so
[   602.723] (II) Module intel: vendor="X.Org Foundation"
[   602.724] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets:
[   602.730] (II) intel(0): Creating default Display subsection in Screen section
[   602.730] (==) intel(0): Depth 24, (--) framebuffer bpp 32
[   602.730] (==) intel(0): RGB weight 888
[   602.730] (==) intel(0): Default visual is TrueColor
[   602.730] (--) intel(0): Integrated Graphics Chipset: Intel(R) HD Graphics 4600
[   602.730] (--) intel(0): CPU: x86-64, sse2, sse3, ssse3, sse4.1, sse4.2, avx, avx2
[   602.730] (**) intel(0): Framebuffer tiled
[   602.730] (**) intel(0): Pixmaps tiled
[   602.730] (**) intel(0): "Tear free" disabled
[   602.730] (**) intel(0): Forcing per-crtc-pixmaps? no
[   602.730] (II) intel(0): Output eDP1 has no monitor section
[   602.730] (--) intel(0): found backlight control interface acpi_video0 (type 'firmware')
[   602.730] (II) intel(0): Output VGA1 has no monitor section
[   602.730] (II) intel(0): Output DP1 has no monitor section
[   602.730] (II) intel(0): Output HDMI1 has no monitor section
[   602.730] (--) intel(0): Output eDP1 using initial mode 1920x1080 on pipe 0
[   602.730] (==) intel(0): DPI set to (96, 96)
[   602.731] (II) intel(0): SNA initialized with Haswell (gen7.5, gt2) backend
[   602.731] (==) intel(0): Backing store enabled
[   602.731] (==) intel(0): Silken mouse enabled
[   602.731] (II) intel(0): HW Cursor enabled
[   602.731] (II) intel(0): RandR 1.2 enabled, ignore the following RandR disabled message.
[   602.731] (==) intel(0): DPMS enabled
[   602.731] (II) intel(0): [DRI2] Setup complete
[   602.731] (II) intel(0): [DRI2]   DRI driver: i965
[   602.731] (II) intel(0): direct rendering: DRI2 Enabled
[   602.731] (==) intel(0): hotplug detection: "enabled"
[   602.738] (II) intel(0): switch to mode 1920x1080@60.0 on pipe 0 using eDP1, position (0, 0), rotation normal
[   602.747] (II) intel(0): Setting screen physical size to 508 x 285
[   602.791] (II) config/udev: Adding input device HDA Intel HDMI HDMI (/dev/input/event10)
[   602.791] (II) config/udev: Adding input device HDA Intel PCH Mic (/dev/input/event7)
[   602.791] (II) config/udev: Adding input device HDA Intel PCH Front Headphone (/dev/input/event6)
[   602.843] (II) intel(0): EDID vendor "CMN", prod id 4931
[   602.843] (II) intel(0): Printing DDC gathered Modelines:
[   602.843] (II) intel(0): Modeline "1920x1080"x0.0  138.78  1920 1966 1996 2080  1080 1082 1086 1112 -hsync -vsync (66.7 kHz eP)
[   602.843] (II) intel(0): Modeline "1920x1080"x0.0   92.52  1920 1966 1996 2080  1080 1082 1086 1112 -hsync -vsync (44.5 kHz e)



Although I'm not sure, because of the last line of this snippet:

Code:

cat /var/log/Xorg.0.log |grep -i modesetting
[   602.720] (==) Matched modesetting as autoconfigured driver 4
[   602.723] (II) LoadModule: "modesetting"
[   602.723] (II) Loading /usr/lib64/xorg/modules/drivers/modesetting_drv.so
[   602.723] (II) Module modesetting: vendor="X.Org Foundation"
[   602.724] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
[   602.729] (WW) Falling back to old probe method for modesetting


Does this fallback mean that kms isn't working?

Anyway, without the xorg.conf file I get a functional X environment, but without 3D acceleration:

Code:

glxinfo               
name of display: :0.0
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Error: couldn't find RGB GLX visual or fbconfig
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Error: couldn't find RGB GLX visual or fbconfig

Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".
Xlib:  extension "GLX" missing on display ":0.0".


So, following this topic, I tried the following xorg.conf:

Code:

cat /etc/X11/xorg.conf
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:00.0 "
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    # Uncomment this line if your computer has no display devices connected to
    # the NVIDIA GPU.  Leave it commented if you have display devices
    # connected to the NVIDIA GPU that you would like to use.
    Option "UseDisplayDevice" "none"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0:02.0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection



And with the following contents in .xinitrc:
Code:

cat .xinitrc
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
exec startxfce4 --with-ck-launch


Which gives me the following Xorg.0.log:

http://pastebin.com/3d3RUSdT

It seems to try to load the nvidia module, but fails:

Code:

Failed to initialize GLX extension (Compatible NVIDIA X driver not found)


Which is strange, because everything seems to be in order. Any help would be appreciated. Thank you very much in advance.


Last edited by Arthanis on Sun Aug 10, 2014 9:27 pm; edited 1 time in total
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2273
Location: Bardowick, Germany

PostPosted: Tue Aug 05, 2014 8:56 am

It is possible to use nvidia *only* with intel disabled, yes. If you want to switch, you need something that can actually do the switch. Bumblebee + primusrun is the best I could find so far (from the "bumblebee" overlay).

Otherwise I do not understand what you want to achieve.

The xorg.conf you posted is one used by switchers like bumblebee. It does not let you use your nvidia card directly; it configures your system to run on the Intel HD. With the configs you posted it will not be possible to use your nvidia card at all.
_________________
Important German:
  1. "Aha" - German reaction to pretend that you are really interested while giving no f*ck.
  2. "Tja" - German reaction to the apocalypse, nuclear war, an alien invasion or no bread in the house.
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Tue Aug 05, 2014 3:28 pm

Well, I see. I guess I misunderstood this post (https://forums.gentoo.org/viewtopic-t-959568.html) because of this:

Quote:
Nouveau driver and bumblebeed : 1500 fps in 5 seconds (glxgears)
Nvidia driver and bumblebeed : 1500 to 1700 fps in 5 seconds (glxgears)
New driver support the optimus by nvidia : 8000 to 8800 fps in 5 seconds (glxgears)


Anyway: no matter which nvidia proprietary driver I choose, do I have to enable hybrid graphics (CONFIG_VGA_SWITCHEROO) in the kernel or not?
krinn
Watchman


Joined: 02 May 2003
Posts: 7071

PostPosted: Tue Aug 05, 2014 3:34 pm    Post subject: Re: Optimus and Nvidia

Arthanis wrote:

Nvidia seems to load fine:
Code:
g1310max linux # dmesg |grep -i nvidia
[    2.140090] nvidia: module license 'NVIDIA' taints kernel.
[    2.171864] Modules linked in: nvidia(PO+) iwldvm snd_hda_codec_via i915(+) snd_hda_codec_generic joydev coretemp iwlwifi snd_hda_intel snd_hda_codec mei_me drm_kms_helper mei snd_hwdep
[    2.172645] CPU: 2 PID: 1343 Comm: nvidia-smi Tainted: P          IO 3.14.14-gentoo #26
[    2.175708]  [<ffffffffa07c86e7>] nvidia_open+0x567/0x8f0 [nvidia]
[    2.175868]  [<ffffffffa07d2dcf>] nvidia_frontend_open+0x4f/0xa0 [nvidia]
[    4.888691] [drm] Initialized nvidia-drm 0.0.0 20130102 for 0000:01:00.0 on minor 1
[    4.888746] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  340.24  Wed Jul  2 14:24:20 PDT 2014



No, to me it looks like you have a kernel oops with nvidia, but we only see the lines that contain the nvidia keyword because of your grep.
This part looks like an oops to me, and it means your driver doesn't load as cleanly as you think:
Code:
[    2.175708]  [<ffffffffa07c86e7>] nvidia_open+0x567/0x8f0 [nvidia]
[    2.175868]  [<ffffffffa07d2dcf>] nvidia_frontend_open+0x4f/0xa0 [nvidia]
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Tue Aug 05, 2014 3:48 pm

krinn, I linked the full dmesg on pastebin in the post; would you please check it out and see if it is indeed an oops?

EDIT: Here is what I think is the relevant part of dmesg regarding nvidia:

Code:

[    2.133071] nvidia: module license 'NVIDIA' taints kernel.
[    2.133075] Disabling lock debugging due to kernel taint
[    2.171903] [drm] GMBUS [i915 gmbus vga] timed out, falling back to bit banging on pin 2
[    2.180007] systemd-udevd[1191]: renamed network interface wlan0 to wlp3s0
[    2.180967] Switched to clocksource tsc
[    2.182087] fbcon: inteldrmfb (fb0) is primary device
[    2.182514] BUG: unable to handle kernel NULL pointer dereference at           (null)
[    2.182519] IP: [<ffffffff816926d7>] __down_common+0x45/0xe4
[    2.182520] PGD c6320067 PUD c5ef3067 PMD 0
[    2.182522] Oops: 0002 [#1] SMP
[    2.182527] Modules linked in: nvidia(PO+) iwldvm coretemp snd_hda_codec_via snd_hda_codec_generic iwlwifi joydev mei_me(+) mei i915(+) snd_hda_intel snd_hda_codec drm_kms_helper snd_hwdep
[    2.182529] CPU: 0 PID: 1342 Comm: nvidia-smi Tainted: P          IO 3.14.14-gentoo #26
[    2.182530] Hardware name: Notebook                         W230SS                 /W230SS                 , BIOS 4.6.5 04/15/2014
[    2.182530] task: ffff880225d587a0 ti: ffff880223f40000 task.ti: ffff880223f40000
[    2.182533] RIP: 0010:[<ffffffff816926d7>]  [<ffffffff816926d7>] __down_common+0x45/0xe4
[    2.182533] RSP: 0018:ffff880223f41ac8  EFLAGS: 00010092
[    2.182534] RAX: 0000000000000000 RBX: ffffffffa0bb1400 RCX: ffff880223f41ad8
[    2.182535] RDX: 7fffffffffffffff RSI: ffffffffa0bb1408 RDI: ffffffffa0bb1400
[    2.182535] RBP: ffff880223f41b28 R08: 0000000000018890 R09: 0000000000000027
[    2.182536] R10: 000000000000001f R11: 0000000000000024 R12: 7fffffffffffffff
[    2.182537] R13: 0000000000000002 R14: 0000000000000000 R15: ffff880225d587a0
[    2.182538] FS:  00007ffc1013c700(0000) GS:ffff88022fa00000(0000) knlGS:0000000000000000
[    2.182538] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    2.182539] CR2: 0000000000000000 CR3: 00000000c5ea4000 CR4: 00000000001407f0
[    2.182539] Stack:
[    2.182541]  0000000000000000 0000000100020002 ffffffffa0bb1408 0000000000000000
[    2.182542]  ffffffff000280da ffffffff00000141 ffff880225d587a0 ffffffffa0bb1400
[    2.182543]  ffff8800c6128000 0000000000000002 ffff88022586c700 00000000000000ff
[    2.182544] Call Trace:
[    2.182547]  [<ffffffff816927d5>] __down+0x18/0x1a
[    2.182550]  [<ffffffff8107e58c>] down+0x3c/0x50
[    2.182616]  [<ffffffffa07b56e7>] nvidia_open+0x567/0x8f0 [nvidia]
[    2.182668]  [<ffffffffa07bfdcf>] nvidia_frontend_open+0x4f/0xa0 [nvidia]
[    2.182670]  [<ffffffff8111bf17>] chrdev_open+0xa7/0x180
[    2.182673]  [<ffffffff81115263>] do_dentry_open+0x243/0x2d0
[    2.182674]  [<ffffffff8111be70>] ? cdev_put+0x30/0x30
[    2.182676]  [<ffffffff81115320>] finish_open+0x30/0x40
[    2.182678]  [<ffffffff811262fe>] do_last.isra.54+0x71e/0xd00
[    2.182680]  [<ffffffff81122783>] ? inode_permission+0x13/0x50
[    2.182682]  [<ffffffff81122c09>] ? link_path_walk+0x69/0x7e0
[    2.182683]  [<ffffffff81126999>] path_openat+0xb9/0x620
[    2.182685]  [<ffffffff8112719e>] do_filp_open+0x3e/0xa0
[    2.182687]  [<ffffffff81133502>] ? __alloc_fd+0x42/0x120
[    2.182689]  [<ffffffff81116aa7>] do_sys_open+0x137/0x220
[    2.182691]  [<ffffffff81116bad>] SyS_open+0x1d/0x20
[    2.182693]  [<ffffffff81699c66>] system_call_fastpath+0x1a/0x1f
[    2.182707] Code: 48 8d 77 08 4d 89 ee 41 54 41 81 e6 81 00 00 00 49 89 d4 53 48 89 fb 48 83 ec 38 48 8b 47 10 48 89 75 b0 48 89 4f 10 48 89 45
b8 <48> 89 08 4c 89 e8 83 e0 01 4c 89 7d c0 c6 45 c8 00 48 89 45 a8
[    2.182709] RIP  [<ffffffff816926d7>] __down_common+0x45/0xe4
[    2.182710]  RSP <ffff880223f41ac8>
[    2.182710] CR2: 0000000000000000
[    2.182711] ---[ end trace 0bfcf8d45430bc02 ]---
.
.
.

[    2.305556] random: nonblocking pool is initialized
[    2.337567] EXT4-fs (sda2): re-mounted. Opts: data=ordered,commit=600
[    3.722738] [drm] Enabling RC6 states: RC6 on, RC6p off, RC6pp off
[    4.843462] Console: switching to colour frame buffer device 240x67
[    4.856882] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
[    4.856884] i915 0000:00:02.0: registered panic notifier
[    4.856979] [Firmware Bug]: ACPI(PEGP) defines _DOD but not _DOS
[    4.857010] ACPI: Video Device [PEGP] (multi-head: yes  rom: yes  post: no)
[    4.857076] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/device:4d/LNXVIDEO:00/input/input15
[    4.858947] ACPI: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
[    4.859603] acpi device:53: registered as cooling_device4
[    4.859684] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:01/input/input16
[    4.859738] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
[    4.859782] [drm] Initialized nvidia-drm 0.0.0 20130102 for 0000:01:00.0 on minor 1
[    4.859789] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  340.24  Wed Jul  2 14:24:20 PDT 2014
.
.
.
[ 1885.406357] bbswitch: version 0.8
[ 1885.406364] bbswitch: Found integrated VGA device 0000:00:02.0: \_SB_.PCI0.GFX0
[ 1885.406371] bbswitch: Found discrete VGA device 0000:01:00.0: \_SB_.PCI0.PEG0.PEGP
[ 1885.406382] ACPI Warning: \_SB_.PCI0.PEG0.PEGP._DSM: Argument #4 type mismatch - Found [Buffer], ACPI requires [Package] (20131218/nsarguments-95)
[ 1885.406472] bbswitch: detected an Optimus _DSM function
[ 1885.406480] bbswitch: Succesfully loaded. Discrete card 0000:01:00.0 is on



It looks OK to me. Anyway, I rebooted after posting and the Xorg message about nvidia is gone.


Last edited by Arthanis on Tue Aug 05, 2014 3:57 pm; edited 1 time in total
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Tue Aug 05, 2014 3:50 pm

Yamakuzure, doesn't this article here (http://wiki.gentoo.org/wiki/NVIDIA_Driver_with_Optimus_Laptops) talk about a bumblebee-less setup?

Quote:
Note
This is about the native Optimus support in x11-drivers/nvidia-drivers - It is *not* about bumblebee - bumblebee is not used in this configuration.


Or does it only work with muxed hardware?
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2273
Location: Bardowick, Germany

PostPosted: Wed Aug 06, 2014 11:27 am

Arthanis wrote:
Yamakuzure, doesn't this article here (http://wiki.gentoo.org/wiki/NVIDIA_Driver_with_Optimus_Laptops) talk about a bumblebee-less setup?

Quote:
Note
This is about the native Optimus support in x11-drivers/nvidia-drivers - It is *not* about bumblebee - bumblebee is not used in this configuration.


Or does it only work with muxed hardware?
No idea. All that game and play with xrandr does not work for me anyway, because on my (muxless) system:
Code:
 ~ $ xrandr --listproviders
Providers: number : 1
Provider 0: id: 0x48 cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 4 outputs: 5 associated providers: 0 name:Intel
Furthermore, they all want to switch OpenGL to nvidia. I will *not* do that with an OpenGL-driven window manager like kwin. ;)
cyberjun
Apprentice


Joined: 06 Nov 2005
Posts: 293

PostPosted: Wed Aug 06, 2014 12:00 pm

Hi,

What worked for me was this: https://wiki.archlinux.org/index.php/NVIDIA_Optimus#LightDM. With lightdm I get to see the login screen, because the xrandr commands are run before login. I have a Lenovo G500s laptop with Optimus. I had almost given up, and then I tried this Arch wiki approach.

Works very nicely. HDMI out works too, along with suspend/hibernate etc.
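In case it is not obvious from the Arch wiki: lightdm runs those xrandr commands via its display-setup-script hook. Roughly, the relevant part of /etc/lightdm/lightdm.conf looks like this on my setup (the script path is just where I keep it, adjust as needed, and make sure the script is executable):

```ini
# /etc/lightdm/lightdm.conf (excerpt)
# Run the xrandr provider setup before the greeter starts, so the
# login screen already appears on the nvidia-driven framebuffer.
[SeatDefaults]
display-setup-script=/etc/lightdm/display_setup.sh
```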

--cyberjun
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Wed Aug 06, 2014 6:46 pm

Quote:

Hi,
What worked for me was this: https://wiki.archlinux.org/index.php/NVIDIA_Optimus#LightDM. With lightdm, I get to see the login screen as the xrandr commands are run before login. I have a Lenovo G500s laptop with optimus. I had almost given up and then I tried this arch wiki.


Well, it seems that this tutorial is bumblebee-less, because it only suggests using bumblebee optionally at the end of the article. Anyway, I will try both ways. I will keep you guys updated.
cyberjun
Apprentice


Joined: 06 Nov 2005
Posts: 293

PostPosted: Thu Aug 07, 2014 3:18 am

Yes, it is bumblebee-less. You have to remove bumblebee from your runlevels and also remove/blacklist the bbswitch module.

--cyberjun
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Thu Aug 07, 2014 7:20 am

cyberjun, would you please enlighten me? It's all so confusing. How can I use Optimus without bumblebee if I have a muxless laptop? And to do that, do I need CONFIG_VGA_SWITCHEROO or not? No bbswitch?

Also, it all goes back to the beginning of my post. Every time I try to startx with that kind of xorg.conf, it gives me the dreaded "no screens found" error. And if I am able to get it working, does PowerMizer (nvidia GPU frequency scaling) work?
cyberjun
Apprentice


Joined: 06 Nov 2005
Posts: 293

PostPosted: Thu Aug 07, 2014 8:17 am

Hi,
I don't know what muxless means. On my laptop I can switch exclusively to integrated graphics or to hybrid (in the BIOS), but not exclusively to the discrete (NVIDIA) card. Does that mean muxless?
You don't need CONFIG_VGA_SWITCHEROO; in fact, disable it if it is already enabled. I also don't have bbswitch loaded, and the bumblebee service is disabled. If you get this setup right, nvidia-settings shows PowerMizer working.

My Nvidia card shows up like this in lspci output:
Quote:
01:00.0 3D controller: NVIDIA Corporation GF117M [GeForce 610M/710M/820M / GT 620M/625M/630M/720M] (rev a1)


My Xorg.conf:
Quote:
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "nvidia"
Inactive "intel"
EndSection

Section "Monitor"
Identifier "Monitor0"
Option "DPMS"
EndSection

Section "Device"
Identifier "nvidia"
Driver "nvidia"
BusID "PCI:1:0:0"
EndSection

Section "Device"
Identifier "intel"
Driver "modesetting"
BusID "PCI:0:2:0"
EndSection


Section "Screen"
Identifier "nvidia"
Device "nvidia"
Option "UseDisplayDevice" "none"
EndSection

Section "Screen"
Identifier "intel"
Device "intel"
Monitor "Monitor0"
EndSection


cat /etc/lightdm/display_setup.sh
Quote:

lsmod | grep nvidia
if [ $? -eq 0 ]; then
    xrandr --setprovideroutputsource modesetting NVIDIA-0
    xrandr --auto
fi


I don't have bbswitch autoloaded.
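For the display_setup.sh script above to run at all, lightdm has to be pointed at it. A minimal sketch of that hookup, assuming the classic lightdm config layout (newer lightdm versions name the section [Seat:*]; the script path comes from the post, the rest is my assumption):

```
# /etc/lightdm/lightdm.conf (hypothetical excerpt)
[SeatDefaults]
display-setup-script=/etc/lightdm/display_setup.sh
```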

cat /etc/modprobe.d/nvidia.conf
Quote:
# Nvidia drivers support
alias char-major-195 nvidia
alias /dev/nvidiactl char-major-195

# To tweak the driver the following options can be used, note that
# you should be careful, as it could cause instability!! For more
# options see /usr/share/doc/nvidia-drivers-340.24/README
#
# !!! SECURITY WARNING !!!
# DO NOT MODIFY OR REMOVE THE DEVICE FILE RELATED OPTIONS UNLESS YOU KNOW
# WHAT YOU ARE DOING.
# ONLY ADD TRUSTED USERS TO THE VIDEO GROUP, THESE USERS MAY BE ABLE TO CRASH,
# COMPROMISE, OR IRREPARABLY DAMAGE THE MACHINE.
options nvidia NVreg_DeviceFileMode=432 NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=27 NVreg_ModifyDeviceFiles=1
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2273
Location: Bardowick, Germany

PostPosted: Thu Aug 07, 2014 10:02 am

From the wikipedia page on nvidia optimus: (MUX == "[MU]ltiple[X]er")

"Optimus avoids usage of a hardware multiplexer and prevents glitches associated with changing the display driver from IGP to GPU by transferring the display surface from the GPU frame buffer over the PCI Express bus to the main memory-based framebuffer used by the IGP. The Optimus Copy Engine is a new alternative to traditional DMA transfers between the GPU framebuffer memory and main memory used by the IGP"

In other words: you cannot use your nvidia card directly; it is always indirect. Without a multiplexer there is no direct connection between any video output and the card.

And further: "The binary Nvidia driver added partial Optimus support on May 3, 2013, in version 319.17. As of May 2013, power management for the discrete card is not supported, which means it cannot save battery by turning the Nvidia graphics card off completely."

So while it is nice that the nvidia linux drivers now support optimus natively in some way, they are out of the window for people like me with a laptop that travels often. I need to be able to switch the discrete card off completely, and only bumblebee can do that at the moment.

However, what is the problem anyway? Is it too much to ask to write "primusrun foo" instead of just "foo"? Actually, I find this more convenient, as I have absolute control over when the nvidia card is switched on and when it is not.
_________________
Important German:
  1. "Aha" - German reaction to pretend that you are really interested while giving no f*ck.
  2. "Tja" - German reaction to the apocalypse, nuclear war, an alien invasion or no bread in the house.
Dr.Willy
Guru


Joined: 15 Jul 2007
Posts: 498
Location: NRW, Germany

PostPosted: Thu Aug 07, 2014 12:37 pm

Yamakuzure wrote:
However, what is the problem anyway? Is it too much to ask to write "primusrun foo" instead of just "foo"? Actually, I find this more convenient, as I have absolute control over when the nvidia card is switched on and when it is not.

Well, how do you run, for instance, flash videos on your nvidia card?
Running the browser with primusrun doesn't seem like a good idea, because, at least for me, firefox is one of those "always on" programs.
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2273
Location: Bardowick, Germany

PostPosted: Thu Aug 07, 2014 2:43 pm

Dr.Willy wrote:
Yamakuzure wrote:
However, what is the problem anyway? Is it too much to ask to write "primusrun foo" instead of just "foo"? Actually, I find this more convenient, as I have absolute control over when the nvidia card is switched on and when it is not.

Well, how do you run, for instance, flash videos on your nvidia card?
Running the browser with primusrun doesn't seem like a good idea, because, at least for me, firefox is one of those "always on" programs.
Why would I want to run something as simple as flash videos on the nvidia card? Nothing would be gained.
No matter whether I start firefox with intel or nvidia, the Firefox FPS test always ends up at 60+. And I haven't found anything on youtube yet that couldn't be displayed full screen without problems. (And I doubt there is anything flash-based out there that really needs the nvidia acceleration. If so, using primusrun/optirun for that alone would be simple enough.)

If you get bad performance with the intel chipset, there might be a different problem?
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Thu Aug 07, 2014 2:43 pm

Quote:

However, what is the problem anyway? Is it too much to ask to write "primusrun foo" instead of just "foo"? Actually, I find this more convenient, as I have absolute control over when the nvidia card is switched on and when it is not.


My reasons (feel free to refute them):

-Well, for starters, it's kind of a pain to run stuff like Steam, since it calls many other executables, and I think it would be a pain to run all of them through optirun/primus.
-From what I've read so far, using bumblebee/optirun seems to have a significant performance hit. I don't know about primus.
-As awesome as Bumblebee is, its setup is kind of messy, with bbswitch, VirtualGL and such. Especially on gentoo, I like to keep things as simple and lightweight as possible, with minimal dependencies and daemons (don't we all?)
-From what I've read so far, HDMI out with bumblebee does not support sound.
-Most of the time I have a power source available, so if I lose 25% of my battery time, I won't bother.
-I had a macbook (late 2011) running gentoo with only nvidia, and the battery lasted quite a while (a little more than 3 hours) because of powermizer. If powermizer works, I really think optimus is overkill that introduces a lot of unnecessary complexity, so I would like to test this for myself. Also, I buy nvidia mostly because of the linux support, so the lack of proper optimus support annoys me to the point that I would rather get rid of optimus.
-I read that compiz and bumblebee won't play along very well (http://askubuntu.com/questions/84369/running-compiz-using-optirun)


In a perfect world, I would be able to use only intel (as it is now), and every time I decide to run steam and/or compiz, I would link a different xorg.conf and restart X. But if that is not possible (or requires too much hassle), I prefer nvidia only.
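Incidentally, the "link a different xorg.conf" idea can be sketched in a few lines of shell. The helper name and the xorg.conf.intel / xorg.conf.nvidia file names are my own assumptions, not something from this thread:

```shell
# gpu_conf: point xorg.conf at one of two pre-made configs, then restart X.
# CONF_DIR defaults to /etc/X11 but is overridable so this can be tried
# in a scratch directory first.
gpu_conf() {
    conf_dir="${CONF_DIR:-/etc/X11}"
    case "$1" in
        intel|nvidia)
            # -sf replaces an existing symlink in place
            ln -sf "xorg.conf.$1" "$conf_dir/xorg.conf"
            ;;
        *)
            echo "usage: gpu_conf intel|nvidia" >&2
            return 1
            ;;
    esac
}
```

After e.g. `gpu_conf nvidia`, log out of the session and start X again so the new config is picked up.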
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2273
Location: Bardowick, Germany

PostPosted: Fri Aug 08, 2014 2:06 pm

Arthanis wrote:
Quote:

However, what is the problem anyway? Is it too much to ask to write "primusrun foo" instead of just "foo"? Actually, I find this more convenient, as I have absolute control over when the nvidia card is switched on and when it is not.


My reasons (feel free to refute them):

-Well, for starters, it's kind of a pain to run stuff like Steam, since it calls many other executables, and I think it would be a pain to run all of them through optirun/primus.
Look up this official howto on configuring Steam to use the nvidia card (via primusrun) only for the intense games that actually need it.
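For reference, the usual way such howtos handle Steam is per game: set the game's launch options in its Steam properties so that only that title goes through the bumblebee wrapper, e.g.:

```
primusrun %command%
```

(or "optirun %command%" for the VirtualGL backend); %command% is Steam's placeholder for the game's own command line.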
Arthanis wrote:
-From what I've read so far, using bumblebee/optirun seems to have a significant performance hit. I don't know about primus.
Yes, there is a virtual X server in between. But the overhead is far less than using virtualgl directly, and primusrun is a lot faster than optirun. No idea how it compares to a MUX'd system using nvidia directly. On my muxless system the difference is < 5%.
Quote:
-As awesome as Bumblebee is, its setup is kind of messy, with bbswitch, VirtualGL and such. Especially on gentoo, I like to keep things as simple and lightweight as possible, with minimal dependencies and daemons (don't we all?)
Well, I did "emerge primusrun" and then added /etc/init.d/bumblebee to the default runlevel. Nothing messy about that. (Of course, I had to unkeyword the live ebuilds of the packages involved.)
Quote:
-From what I've read so far, HDMI out with bumblebee does not support sound.
-Most of the time I have a power source available, so if I lose 25% of my battery time, I won't bother.
Those are good points, although it's more like ~66% of the battery time. However, if your laptop does not 'travel', why fiddle with this stuff at all instead of switching to nvidia-only via the BIOS?
Quote:

-I had a macbook (late 2011) running gentoo with only nvidia, and the battery lasted quite a while (a little more than 3 hours) because of powermizer. If powermizer works, I really think optimus is overkill that introduces a lot of unnecessary complexity, so I would like to test this for myself. Also, I buy nvidia mostly because of the linux support, so the lack of proper optimus support annoys me to the point that I would rather get rid of optimus.
-I read that compiz and bumblebee won't play along very well (http://askubuntu.com/questions/84369/running-compiz-using-optirun)
Good points, right. I wouldn't know, as I have not used compiz in over 3 years; kwin has enough effects for my liking. But on other DEs/WMs I can see why one would want to use compiz. However, isn't your intel HD chipset strong enough for compiz? Four years ago I had an old Core2Duo with integrated intel graphics only, and compiz worked just fine on two monitors. For today's Intel HD chips it should be a joke, right?
Quote:
In a perfect world, I would be able to use only intel (as it is now), and every time I decide to run steam and/or compiz, I would link a different xorg.conf and restart X. But if that is not possible (or requires too much hassle), I prefer nvidia only.
So the steam problem is solved above, and maybe compiz is no problem at all? ;)

Everybody should use what works best for them. If nvidia-only works best for you, then to hell with the intel HD chip. :D
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Sat Aug 09, 2014 1:40 am

Well, I was finally able to use nvidia-only with the proprietary drivers. But before I mark this thread as SOLVED, I will try to give back to the community by summarizing the experience and knowledge I gained on this subject, because it is not clear what all the options for optimus on linux are. If anyone sees an error, please feel free to correct any misconception I may have.

As far as I can tell, these are the options when it comes to optimus on Linux:

Option A: If you have muxed hardware, you can disable the integrated GPU, use only nvidia with either nvidia-drivers or nouveau, and be happy.

If you have muxless hardware (or are willing to use the integrated chip), the options are:

Option B: Use the nvidia proprietary drivers' optimus support, using the nvidia chip all the time

If you choose to use bumblebee (with either nouveau or nvidia), there are two backends that transport the data from the framebuffer:
Option C: Bumblebee + VirtualGL ("network" backend, slower)
Option D: Bumblebee + primus (local backend, faster)

Options A and B:
-PROS: Less hassle, better performance
-CONS: More heat, more power consumption

Options C and D:
-PROS: Flexibility, better power management
-CONS: More hassle, more daemons, more bugs, less stability, no HDMI sound, compulsory use of a DM (I like auto-startx after shell login, without any DM)

I finally got it working with option B, with the following steps:

- In the kernel, enable the intel i915 driver with kernel modesetting (DRM_I915_KMS), plus agpgart and agp_intel (the rcu_idle_gp_delay tweak is a boot parameter, set below)
- Set the following options in /etc/default/grub (and regenerate grub.cfg afterwards, e.g. with grub2-mkconfig -o /boot/grub/grub.cfg):
Code:
GRUB_CMDLINE_LINUX_DEFAULT="drm_kms_helper.edid_firmware=edid/g1310max.bin i915.modeset=1 rcutree.rcu_idle_gp_delay=1"

- Enable VIDEO_CARDS="intel modesetting nvidia" in make.conf, which will pull in xorg-drivers and xf86-video-intel when emerging xorg-server
- Install xrandr
- Create the following ~/.xinitrc:
Code:

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
exec ck-launch-session startxfce4  #for xfce4, probably works with any other DE


- Use the following xorg.conf:
Code:

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "nvidia" 0 0
    Inactive       "intel"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "keyboard"
EndSection

Section "InputDevice"
    # generated from data in "/etc/conf.d/gpm"
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol"
    Option         "Device" "/dev/input/mice"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:1:0:0"
EndSection

Section "Device"
    Identifier     "intel"
    Driver         "modesetting"
    BusID          "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier     "nvidia"
    Device         "nvidia"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "UseDisplayDevice" "none"
    SubSection     "Display"
        Depth       24
        Modes      "nvidia-auto-select"
    EndSubSection
EndSection

Section "Screen"
    Identifier     "intel"
    Device         "intel"
    Monitor        "Monitor0"
EndSection



- I HATE THE DAMN DMs, so I put this in my ~/.bash_profile:
Code:

#!/bin/sh
startx

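A common refinement of this trick (my own suggestion, not something from this thread) is to guard the startx call so it only fires on the first virtual terminal and never from inside an already-running X session:

```shell
# should_startx DISPLAY TTY: succeed only when auto-starting X makes sense
should_startx() {
    # no X display yet, and we are on the first virtual terminal
    [ -z "$1" ] && [ "$2" = "/dev/tty1" ]
}

# In ~/.bash_profile:
if should_startx "$DISPLAY" "$(tty)"; then
    startx
fi
```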

That being said, one thing I still don't quite understand is when to use VGA_SWITCHEROO. I have it disabled and everything works fine for me.

Answering Yamakuzure:

Quote:

Although it's more like ~66% of the battery time. However, if your Laptop does not 'travel', then why fiddle with the stuff anyway instead of switching to nvidia-only via BIOS?


Simply because, unfortunately, I don't have muxed hardware =/ . I really don't believe it's 66%, since I checked and powermizer is working. I'm benchmarking it right now, since I can now switch between nvidia-only and intel-only using xorg.conf, and will post the results.

I would like to suggest that someone who knows more than me makes the appropriate corrections and turns this into a sticky thread, because it seems to me that there are still a lot of questions about the subject.

Anyway, thank you all for the support. I'm very happy that I got it working, since I can now switch manually between intel and nvidia at will. I will come back with the results of my battery time benchmark.
Moriah
Advocate


Joined: 27 Mar 2004
Posts: 2117
Location: Kentucky

PostPosted: Sat Aug 09, 2014 3:56 am

I recently upgraded from a Lenovo w500 to a w530, mainly to get 8 cores and 32 GB of ram. I always used the w500 in intel mode and used the vga output to an external monitor when at my desk. On battery, I had the same 1920x1200 resolution on the laptop screen as on the external vga. The two monitors showed exactly the same picture.

The w530 is a downgrade as far as the built-in screen is concerned: the laptop screen is only 1920x1080. But the graphics controller is either intel or an nvidia K1000M, which has very high resolution capabilities via the mini-displayport. The vga connector on the w530 gives me problems: I tried to connect it to the same external monitor I used with my w500, and it failed to read the EDID info, so the resolution was early-1990s-style 800x600. :evil:

So I decided to bite the bullet and got a Dell P2815Q monitor that does 3840 x 2160. :D

It works great under w7, which I only installed to prove that the hardware would work.

It must use the displayport input on the monitor, which means it must use the mini-displayport output from the w530. But the mini-displayport output only works with the nvidia controller, so I must get the nvidia controller working under gentoo linux.

I boot my laptop to a command-prompt login, not an X server, so all that flap about a blank screen before login will not affect me. I do, however, switch between the tty consoles with ctrl-alt-F1 through F6, and between multiple X servers on ctrl-alt-F7 through F11; F12 displays my syslog.

I run a minimal window manager: ctwm, which is basically a very simple twm with multiple virtual desktops.

Since it will be impossible to keep the same picture on the laptop screen and the Dell monitor (given I run the Dell in its 3840x2160 mode), I am willing to give up the laptop display entirely when using the external monitor. I usually closed the lid when using the external monitor on the w500 anyway.

I am currently running:
Code:

onesimus ~ $ uname -a
Linux onesimus 3.12.21-gentoo-r1 #2 SMP PREEMPT Tue Jun 17 21:26:50 EDT 2014 x86_64 Intel(R) Core(TM) i7-3740QM CPU @ 2.70GHz GenuineIntel GNU/Linux
rj@onesimus ~ $


I have my BIOS configured for the laptop display as the boot display, optimus enabled, and auto-detectable.

It boots into gentoo and runs X fine on the built-in laptop display.

What do I need to do to get the nvidia controller working with the Dell hires monitor?

I do not play video games or other stressing animated video stuff, nor CAD, etc. I do software development, so I like lots of text on the screen, but it doesn't need extreme update rate performance, just extreme resolution.

I operate on battery with the intel graphics chip only, and would like to keep it that way, but I need the nvidia chip for the hires monitor when I am at my desk.

I appreciate all the help I can get on this.

PS: I expect one day, after all this other stuff is working, to upgrade my laptop's built-in screen to the one used in the w540, which is 2880x1620. My wife has a Dell laptop with a 31??x1800 touch screen, and the resolution looks beautiful. I suspect I would need the nvidia controller to drive that higher-resolution 2880x1620 screen, but I am not sure.
_________________
The MyWord KJV Bible tool is at http://www.elilabs.com/~myword

Foghorn Leghorn is a Warner Bros. cartoon character.
Moriah
Advocate


Joined: 27 Mar 2004
Posts: 2117
Location: Kentucky

PostPosted: Sat Aug 09, 2014 9:30 pm

I have installed the nvidia driver and have at least that much working; when I boot the new kernel, I can see:
Code:

onesimus ~ # lsmod | grep nv
nvidiafb               39840  0
cfbfillrect             3790  1 nvidiafb
cfbimgblt               2354  1 nvidiafb
vgastate                8801  1 nvidiafb
cfbcopyarea             3298  1 nvidiafb
fb_ddc                  1407  1 nvidiafb
onesimus ~ #

Also, I have set the BIOS to use the mini-displayport for output, and the discrete graphics is enabled, optimus is disabled.

This lets the laptop boot and use the external Dell monitor as the boot display. That all works.

BTW, I finally figured out how to pair and connect my microsoft bluetooth mouse and keyboard under the new bluez5 stuff. It ain't pretty, but at least it works. I'll pretty it up later, after I get the nvidia card and Dell monitor working properly.

Now I start X right from the command prompt, using X &, and that works too. I can then start a window manager with DISPLAY=:0 twm &, and I get the mouse menu and all that works. I can switch between the bare console and the X display on the Dell monitor with ctrl-alt-F1 and ctrl-alt-F7, just like I do when running with the integrated intel video on the laptop screen.

The problem is that the X display is only 1024x768, which is a victory because at least it displays something, but hardly the resolution I paid for when I shelled out the bucks for the nice super-hi-res monitor. :evil:

Furthermore, the get-edid | parse-edid sequence fails to read the EDID from the Dell monitor:
Code:

onesimus ~ # get-edid | parse-edid
parse-edid: parse-edid version 2.0.0
get-edid: get-edid version 2.0.0

        Performing real mode VBE call
        Interrupt 0x10 ax=0x4f00 bx=0x0 cx=0x0
        Function supported
        Call successful

        VBE version 300
        VBE string at 0x11100 "NVIDIA"

VBE/DDC service about to be called
        Report DDC capabilities

        Performing real mode VBE call
        Interrupt 0x10 ax=0x4f15 bx=0x0 cx=0x0
        Function supported
        Call successful

        Monitor and video card combination does not support DDC1 transfers
        Monitor and video card combination does not support DDC2 transfers
        0 seconds per 128 byte EDID block transfer
        Screen is not blanked during DDC transfer

Reading next EDID block

VBE/DDC service about to be called
        Read EDID

        Performing real mode VBE call
        Interrupt 0x10 ax=0x4f15 bx=0x1 cx=0x0
        Function supported
        Call failed

The EDID data should not be trusted as the VBE call failed
Error: output block unchanged
parse-edid: IO error reading EDID
onesimus ~ #


So, since I can be happy booting for the external monitor and re-booting for the built-in monitor, my problem now is to get the resolution up to snuff on the external monitor.

Any suggestions? :?:
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Sun Aug 10, 2014 9:26 pm

Moriah, I don't know how to solve your problem, but I do know that the nvidiafb module you have loaded conflicts with nvidia-drivers, so for starters I would remove or blacklist it.
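For reference, blacklisting it is a one-line modprobe.d entry (the file name below is arbitrary, my own choice; rivafb is another framebuffer driver the nvidia README lists as conflicting):

```
# /etc/modprobe.d/blacklist-framebuffer.conf
blacklist nvidiafb
blacklist rivafb
```

If the module gets loaded from an initramfs, regenerate that as well.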

Anyway, I'm marking this thread as solved. The results for power management are:

intel only: about 4:00 hours of battery power
nvidia only: about 3:40 hours of battery power

Obviously, I did not use any gpu-intensive software, just normal browsing and such. Thank you all for the help.
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2273
Location: Bardowick, Germany

PostPosted: Mon Aug 11, 2014 9:14 am

Well, it surely depends on usage. I can only say that on my laptop 'powertop' reports a power consumption between 15 and 18 watts on intel, and 48 to 52 watts on nvidia. But then, I am using kwin with the opengl backend, so I'd have more gpu usage anyway.
Arthanis
Apprentice


Joined: 21 Mar 2008
Posts: 161

PostPosted: Mon Aug 11, 2014 2:30 pm

I couldn't agree more. I'm using xfce with very little compositing: no compiz or anything. Would you please tell me which command line you used for powertop? Mine has the following output:
Code:
powertop
Summary: -nan wakeups/second,  -nan GPU ops/seconds, -nan VFS ops/sec and -0.0% CPU use

                Usage       Events/s    Category       Description
            100.0%                      Device         Display backlight
              2.1 pkts/s                Device         Network interface: wlp3s0 (iwlwifi)
              0.0 pkts/s                Device         Network interface: enp4s0f1 (r8169)
            100.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1
            100.0%                      Device         PCI Device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller
            100.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller
            100.0%                      Device         PCI Device: Intel Corporation HM87 Express LPC Controller
            100.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1
            100.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [
            100.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3
            100.0%                      Device         PCI Device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Control
            100.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4
            100.0%                      Device         PCI Device: Realtek Semiconductor Co., Ltd. Device 5287
            100.0%                      Device         PCI Device: NVIDIA Corporation GM107M [GeForce GTX 860M]
              0.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI
              0.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1
              0.0%                      Device         USB device: EHCI Host Controller
              0.0%                      Device         USB device: usb-device-8087-07da
              0.0%                      Device         USB device: usb-device-8087-8008
              0.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller
              0.0%                      Device         alsa:hwC0D0
              0.0%                      Device         PCI Device: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2
              0.0%                      Device         USB device: EHCI Host Controller
              0.0%                      Device         PCI Device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller
              0.0%                      Device         USB device: xHCI Host Controller
              0.0%                      Device         USB device: usb-device-8087-8000
              0.0 pkts/s                Device         nic:sit0
              0.0%                      Device         USB device: xHCI Host Controller



Thanks in advance.
Yamakuzure
Advocate


Joined: 21 Jun 2006
Posts: 2273
Location: Bardowick, Germany

PostPosted: Mon Aug 11, 2014 3:14 pm

No special command-line options, just 'powertop'.

powertop only displays the important values when you are on battery. ;) Otherwise it is quite limited. Here is a comparison:

powertop on AC:
Code:
PowerTOP 2.5      Overview   Idle stats   Frequency stats   Device stats   Tunables                                     

Summary: 1876.2 wakeups/second,  63.9 GPU ops/seconds, 0.0 VFS ops/sec and 18.6% CPU use

                Usage       Events/s    Category       Description
             70.5 ms/s     697.2        Process        /opt/vmware/lib/vmware/bin/vmware-vmx.real -s vmx.stdio.keep=TRUE -# product=1;name=VMware Workstation;version=10.0.2;buildnumb
             34.3 ms/s     355.0        Process        /usr/bin/firefox
              9.9 ms/s     182.9        Process        kwin


powertop on battery:
Code:
PowerTOP 2.5      Overview   Idle stats   Frequency stats   Device stats   Tunables                                     

The battery reports a discharge rate of 24.7 W
The estimated remaining time is 2 hours, 32 minutes

Summary: 1313.1 wakeups/second,  40.9 GPU ops/seconds, 0.0 VFS ops/sec and 13.5% CPU use

                Usage       Events/s    Category       Description
             65.8 ms/s     457.9        Process        /opt/vmware/lib/vmware/bin/vmware-vmx.real -s vmx.stdio.keep=TRUE -# product=1;name=VMware Workstation;version=10.0.2;buildnumb
              3.4 ms/s     206.2        Timer          hrtimer_wakeup
             22.8 ms/s     139.5        Process        /usr/bin/firefox
              5.8 ms/s      84.8        Process        kwin
Note: VMware runs on nvidia using primusrun, so it is a bit intense right now.
Maffblaster
Developer


Joined: 01 May 2007
Posts: 65
Location: Spokane, Washington, USA

PostPosted: Tue Aug 19, 2014 4:54 pm    Post subject: Response to Moriah

Moriah wrote:
I have installed the nvidia driver and have at least that much working, as when I boot the new kernel, I can see:

Any suggestions? :?:


Did you get everything working, Moriah?

I have a different monitor setup than you, but I'm having the same problems. For some reason kdm is failing to start for me, although the nvidia drivers seem to be loaded properly in the kernel. I get the "no screens found" error when attempting to start the X server, and a similar message from xrandr (a "can't open display" error). I actually haven't tried putting my computer on the dock during the boot process, so I'll try that when I get home. Maybe, just maybe, it will be able to locate the monitors...
_________________
Let's make Gentoo better together.
Wiki: https://wiki.gentoo.org/wiki/User:Maffblaster
Blog: http://dev.gentoo.org/~maffblaster/