Since I could not find much information on the web about using the discrete graphics on these systems, I decided to try out a few things on my machine, and this howto is a summary of the results. One can use either the proprietary ATI drivers (fglrx) or the open-source radeon drivers to get graphics acceleration. I prefer fglrx, since it offers more features, including good OpenGL performance (as expected of these upper mid-range GPUs) and support for running OpenCL applications. It is also neatly done: we get to use Intel KMS (Kernel Mode Setting) while the Radeon does the rendering! A word of caution: I still usually prefer the Intel graphics, since the machine runs hot with fglrx (or radeon), and I notice a slight lag while using fglrx (possibly because the output from the radeon is routed through the Intel graphics).
To use the open-source radeon drivers, currently the only working way seems to be along the lines of the Bumblebee and Ironhide projects (see [2] and [3]) for Intel/Nvidia hybrid graphics (also muxless, known as Optimus, and analogous to PowerXpress): start a second X server on the discrete graphics card, then use VirtualGL (see [4]) to run (render?) OpenGL applications on the second X server while outputting images to the primary X server (running on the integrated graphics). (PS: This is possibly an inaccurate layman's summary of things, as is most of this howto.)
I could not get fglrx to work with VirtualGL, nor could I get the open-source radeon drivers to drive the primary X server. While I have tested things on my notebook, a Sony Vaio S 13.3 (VPCSA3AFX), these methods should work on similar notebooks, with the only likely change being the PCI Bus ID of the discrete graphics device (as described further along). Both driver methods are now described in some detail.
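On a different notebook, the detail that typically changes is just that BusID: lspci reports a slot like "01:00.0", while Xorg's BusID option wants "PCI:01:00:0". A tiny helper (a sketch; the function name is my own, but the sed substitution is the same one used further below in this howto) does the conversion:

```shell
#!/bin/bash
# Convert an lspci slot ("01:00.0") into the BusID string Xorg expects
# ("PCI:01:00:0") by replacing the dot with a colon and adding the prefix.
lspci_to_busid() {
    echo "PCI:$(echo "$1" | sed 's/\./:/')"
}

# Example: find the discrete ATI card and print its Xorg BusID.
# lspci_to_busid "$(lspci | grep VGA | grep ATI | awk '{print $1}')"
```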
Method 1: ATI proprietary drivers (fglrx) method:
This method is fairly straightforward. In a nutshell, the system initially boots up with the Intel i915 driver running the display, but the X server is started with the fglrx driver, which works in combination with the userspace component of the Intel drivers (xf86-video-intel): the actual graphics processing happens on the discrete GPU, and the images are transferred to the Intel GPU, which in turn is the only device connected to the display outputs.
Hence, the first thing we need is a standard Intel graphics software stack: a recent kernel with the i915 driver, the xf86-video-intel package with version <2.16 (preferably =2.15*), and recent mesa and libdrm. The kernel configuration and other details are described later while discussing the second method, which uses the open-source drivers. The significant detail is that the i915 driver should preferably be built into the kernel image instead of being modular. (Even if built modular, it should be loaded before fglrx, perhaps using an initramfs. As explained later, the open-source radeon driver in the kernel, if built, should preferably be modular and disabled at boot using the option "radeon.disable=1" on the bootloader kernel command line.) Regarding the xf86-video-intel package, it is important to use a version <2.16, as was fortunately noted in the forum post [5] (by bojojo2020), because the current version of ati-drivers, 11.12, does not seem to work with newer versions of xf86-video-intel. (I was initially unsuccessful with 2.17, and in my opinion this is perhaps the only detail that would cause someone to fail in getting fglrx running.)
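Since the radeon KMS driver must not grab the card before fglrx, it can help to verify from a script that it is not loaded. A small sketch along the lines of the lsmod check used later in this howto (the function name is my own):

```shell
#!/bin/bash
# Succeed if the named module appears in `lsmod`-style output on stdin.
# Usage: lsmod | module_loaded radeon && echo "radeon is loaded"
module_loaded() {
    awk -v m="$1" '$1 == m { found = 1 } END { exit !found }'
}
```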
The rest is along the lines of what is described in the forum thread [5] and the Gentoo Wiki [6]. Here is a short summary. First, emerge ati-drivers, preferably the latest, 11.12 (as of now):
Code: Select all
# emerge -av ati-drivers
Next, select the ATI OpenGL implementation and generate an initial xorg.conf:
Code: Select all
# eselect opengl set ati
Code: Select all
# aticonfig --initial
Then edit /etc/X11/xorg.conf, specifying the fglrx driver and the BusID of the discrete card, along the following lines:
Code: Select all
## /etc/X11/xorg.conf for fglrx
Section "Device"
Identifier "Configured Video Device"
Driver "fglrx"
BusID "PCI:01:00:0"
EndSection
Section "Monitor"
Identifier "Configured Monitor"
EndSection
Section "Screen"
Identifier "Default Screen"
Monitor "Configured Monitor"
Device "Configured Video Device"
EndSection
Section "ServerLayout"
Identifier "Default Layout"
Option "ignoreABI" "1"
EndSection
The BusID line above can be generated from the lspci output:
Code: Select all
# echo "PCI:$(lspci | grep VGA | grep ATI | awk '{print $1}' | sed 's/\./:/')"
Finally, switch PowerXpress to the discrete GPU:
Code: Select all
# aticonfig --px-dgpu
To use the Intel graphics subsequently, reverse the changes made earlier. We do:
Code: Select all
# aticonfig --px-igpu
Code: Select all
# eselect opengl set xorg-x11
Code: Select all
## /etc/X11/xorg.conf for intel
Section "Device"
Identifier "Configured Video Device"
Driver "intel"
EndSection
Section "Monitor"
Identifier "Configured Monitor"
EndSection
Section "Screen"
Identifier "Default Screen"
Monitor "Configured Monitor"
Device "Configured Video Device"
EndSection
Section "ServerLayout"
Identifier "Default Layout"
Option "ignoreABI" "1"
EndSection
It is helpful to write scripts for these various actions; see, for example, [6]. It may also be possible to switch more seamlessly between the integrated and discrete graphics cards, as is done on Windows systems: while restarting the X server may still be required, it would perhaps be possible to use the same xorg.conf. The ATI Catalyst Control Center (the settings manager for fglrx) has a section for hybrid graphics, which may be useful.
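As a concrete starting point, here is a minimal sketch of such a switching script. The file names /etc/X11/xorg.conf.fglrx and /etc/X11/xorg.conf.intel are my own convention for saved copies of the two configs above, and setting DRY_RUN=1 only prints the commands instead of executing them:

```shell
#!/bin/bash
# /usr/local/bin/gfx_switch.bash -- a sketch, not from the original howto.
# Copies a saved xorg.conf variant into place, selects the matching OpenGL
# implementation, and flips PowerXpress; restart the X server afterwards.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

gfx_switch() {
    case "$1" in
        dgpu)   # discrete ATI card via fglrx
            run cp /etc/X11/xorg.conf.fglrx /etc/X11/xorg.conf
            run eselect opengl set ati
            run aticonfig --px-dgpu
            ;;
        igpu)   # integrated Intel card
            run cp /etc/X11/xorg.conf.intel /etc/X11/xorg.conf
            run eselect opengl set xorg-x11
            run aticonfig --px-igpu
            ;;
        *)
            echo "Usage: gfx_switch {dgpu|igpu}" >&2
            return 1
            ;;
    esac
}

# Invoke as, e.g.:  gfx_switch dgpu
```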
Method 2: Open source drivers (radeon) method:
This method follows exactly the ideas of the "Bumblebee" [2] and "Ironhide" [3] projects, which are meant for the similar muxless Intel/Nvidia hybrid graphics systems. While these two projects explicitly target Nvidia hybrid graphics, the methods easily extend to ATI hybrid graphics as well. The following is a summary of such an attempt on my computer; it also tries to provide some information about how it all works, since I could not find much documentation on the web. The technique is simple yet novel: start a second X server on the discrete graphics card, then use VirtualGL to run (i.e., render) 3D applications on the secondary X server while outputting the final images to the primary X server (which runs on the integrated card). See the VirtualGL project at [4] for fascinating reading.
Now for the details.
2.0. Prerequisites:
We need fairly recent kernel and X packages. For example, my system has the following relevant packages:
Code: Select all
sys-kernel/helium-sources-3.1.6
sys-kernel/linux-firmware-20110818
x11-drivers/xf86-video-intel-2.15.0-r1 USE="dri"
x11-drivers/xf86-video-ati-6.14.3
x11-base/xorg-server-1.10.4 USE="ipv6 nptl udev xnest xorg xvfb -dmx -doc -kdrive -minimal -static-libs -tslib"
x11-libs/libdrm-9999-r101 USE="libkms -static-libs" VIDEO_CARDS="intel nouveau radeon vmware"
media-libs/mesa-9999-r101 USE="classic d3d egl g3dvl gallium gbm gles1 gles2 llvm nptl openvg shared-glapi vdpau xvmc -bindist -debug -osmesa -pax_kernel -pic (-selinux) -shared-dricore -wayland" VIDEO_CARDS="i915 i965 intel nouveau r100 r200 r300 r600 radeon vmware"
Some notes on the above packages:
2.0.1 kernel: For the kernel, while I am using "helium-sources", the standard "gentoo-sources" or "vanilla-sources" >=3.1 should do. The package "linux-firmware" or "radeon-ucode" is required to supply firmware for the latest radeon graphics cards; the appropriate firmware files should also be referenced in the kernel config, as explained further below. In the kernel configuration, select the "i915" driver (for Intel graphics) to be built into the kernel, and the "radeon" driver to be modular:
Code: Select all
CONFIG_DRM_I915=y
CONFIG_DRM_RADEON=m
Also enable vga_switcheroo (and debugfs, through which it is accessed) for powering the discrete card on and off:
Code: Select all
CONFIG_VGA_SWITCHEROO=y
CONFIG_DEBUG_FS=y
For referencing the appropriate firmware files, the following kernel config options need to be set as below:
Code: Select all
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE="radeon/BARTS_mc.bin radeon/BARTS_me.bin radeon/BARTS_pfp.bin radeon/BTC_rlc.bin radeon/CAICOS_mc.bin radeon/CAICOS_me.bin radeon/CAICOS_pfp.bin radeon/CAYMAN_mc.bin radeon/CAYMAN_me.bin radeon/CAYMAN_pfp.bin radeon/CAYMAN_rlc.bin radeon/CEDAR_me.bin radeon/CEDAR_pfp.bin radeon/CEDAR_rlc.bin radeon/CYPRESS_me.bin radeon/CYPRESS_pfp.bin radeon/CYPRESS_rlc.bin radeon/JUNIPER_me.bin radeon/JUNIPER_pfp.bin radeon/JUNIPER_rlc.bin radeon/PALM_me.bin radeon/PALM_pfp.bin radeon/R600_rlc.bin radeon/R700_rlc.bin radeon/REDWOOD_me.bin radeon/REDWOOD_pfp.bin radeon/REDWOOD_rlc.bin radeon/SUMO2_me.bin radeon/SUMO2_pfp.bin radeon/SUMO_me.bin radeon/SUMO_pfp.bin radeon/SUMO_rlc.bin radeon/TURKS_mc.bin radeon/TURKS_me.bin radeon/TURKS_pfp.bin"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware/"
The list of files for CONFIG_EXTRA_FIRMWARE can be generated from the firmware installed by the linux-firmware package, skipping any files already present in the kernel source tree:
Code: Select all
# emerge -pv linux-firmware
# for filename in $(ls /lib/firmware/radeon/); do if [ ! -e "/usr/src/linux/firmware/radeon/${filename}" ]; then echo -ne " radeon/${filename}"; fi; done; echo " "
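The same check can be wrapped into a small function that prints, in CONFIG_EXTRA_FIRMWARE form, the radeon firmware files present in one directory but missing from another (a sketch; the function name is my own):

```shell
#!/bin/bash
# Print "radeon/NAME" entries, space-separated on one line, for firmware
# files present in $1 (e.g. /lib/firmware/radeon) but missing from $2
# (e.g. /usr/src/linux/firmware/radeon).
missing_radeon_fw() {
    local src="$1" dst="$2" path name out=""
    for path in "$src"/*; do
        [ -e "$path" ] || continue        # skip if the glob matched nothing
        name="$(basename "$path")"
        [ -e "$dst/$name" ] || out="$out radeon/$name"
    done
    echo "${out# }"
}
```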
Blacklist the "radeon" kernel driver module in /etc/modprobe.d/blacklist.conf by adding the line:
Code: Select all
blacklist radeon
It can also be blacklisted (along with fglrx) on the kernel command line, as in the following grub2 entry:
Code: Select all
## /boot/grub2/grub.cfg
menuentry "Linux" {
insmod ext2
set root=(hd0,5)
linux /boot/vmlinuz root=/dev/sda5 rootfstype=ext4 \
splash=silent,theme:BlueCurls CONSOLE=/dev/tty1 quiet logo.nologo \
video=uvesafb:off video=vesafb:off \
modprobe.blacklist=fglrx,radeon ikms=1 xdriver=xorg \
i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 i915.lvds_downclock=1 \
pcie_aspm=force
initrd /boot/initramfs-nonlive.igz
}
2.0.2 X and mesa: The "xf86-video-intel", "xf86-video-ati" and "xorg-server" package versions on my system are currently in the stable portage tree. If not planning to use fglrx, it is better to use the newer xf86-video-intel packages, say 2.17, with USE="dri sna": the performance is better with the newer SNA (SandyBridge's New Acceleration) driver architecture, and power management also seems better. For the "mesa" and "libdrm" packages, while the stable (or ~arch) versions in portage should suffice, it is better to get the git versions, either by unmasking the corresponding -9999 packages in portage or, even better, by using the x11 overlay versions.
At this stage, the integrated Intel graphics is expected to be fully functional, with proper 2D and 3D acceleration (see the output of "glxinfo") and working power management.
On some systems, for example on my Sony Vaio, the discrete graphics card is powered on by default and keeps consuming power, which apart from increasing battery usage also raises the system temperature. It can be powered down when not in use (and I highly recommend doing so) by loading the radeon module and using the debugfs interface of vga_switcheroo as follows:
Code: Select all
# modprobe radeon
# echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
Alternatively, the card can be powered off through its ACPI methods using the "acpi_call" module:
Code: Select all
# emerge -pv acpi_call
# modprobe acpi_call
# echo "\_SB.PCI0.PEG0.PEGP._OFF" > /proc/acpi/call
2.1. Running second X server on the discrete GPU:
Create a separate X config file, say /etc/X11/xorg.conf.pxp, and explicitly specify the bus ID of the discrete GPU in the device section:
Code: Select all
## /etc/X11/xorg.conf.pxp
Section "Device"
Identifier "Device0"
Driver "radeon"
BusID "PCI:01:00:0"
EndSection
Section "Monitor"
Identifier "Monitor0"
EndSection
Section "Screen"
Identifier "Screen0"
Monitor "Monitor0"
Device "Device0"
EndSection
Before starting the X server, make sure that the radeon driver is loaded and the discrete graphics card is powered on:
Code: Select all
# if [ -z "$(lsmod | grep -i radeon)" ]; then modprobe radeon; fi
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :Off:0000:01:00.0
# echo ON > /sys/kernel/debug/vgaswitcheroo/switch
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :Pwr:0000:01:00.0
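In scripts, the power state of the discrete card can be read from this output instead of being eyeballed; a small sketch (the function name is my own):

```shell
#!/bin/bash
# Print the power state ("Pwr" or "Off") of the discrete (DIS) card from
# vgaswitcheroo status lines fed on stdin, e.g.:
#   dis_state < /sys/kernel/debug/vgaswitcheroo/switch
# The fields are colon-separated, with the state in the fourth field.
dis_state() {
    awk -F: '/DIS/ { print $4 }'
}
```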
Start the second X server by using the X config created before:
Code: Select all
# X -ac -config /etc/X11/xorg.conf.pxp -sharevts -nolisten tcp -noreset :8 vt9
The startup output should confirm that the config file is picked up and that kernel modesetting is enabled:
Code: Select all
X.Org X Server 1.10.4
Release Date: 2011-08-19
X Protocol Version 11, Revision 0
Build Operating System: Linux 3.1.6-helium x86_64 Gentoo
Current Operating System: Linux Shadow 3.1.6-helium #1 SMP Fri Dec 30 02:03:48 PST 2011 x86_64
Kernel command line: BOOT_IMAGE=/boot/vmlinuz root=/dev/sda5 rootfstype=ext4 splash=silent,theme:BlueCurls CONSOLE=/dev/tty1 quiet logo.nologo video=uvesafb:off video=vesafb:off modprobe.blacklist=nvidia,fglrx ikms=1 xdriver=xorg i915.i915_enable_rc6=1 i915.i915_enable_fbc=1 i915.lvds_downclock=1 pcie_aspm=force
Build Date: 28 December 2011 04:36:04PM
Current version of pixman: 0.24.0
Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.8.log", Time: Tue Jan 17 22:39:00 2012
(++) Using config file: "/etc/X11/xorg.conf.pxp"
(==) Using config directory: "/etc/X11/xorg.conf.d"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
(II) [KMS] Kernel modesetting enabled.
2.2. Run a 3D application using VirtualGL:
This is the second key ingredient and the final step. Emerge the package "virtualgl" by using the ebuild from [7] or [8]:
Code: Select all
# emerge virtualgl
Now test by running glxinfo through vglrun, with the second X server (display :8) doing the rendering:
Code: Select all
# vglrun -d :8 glxinfo | head -n 30
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: VirtualGL
server glx version string: 1.4
server glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_SGI_make_current_read, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SUN_get_transparent_index, GLX_ARB_create_context
client glx vendor string: VirtualGL
client glx version string: 1.4
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_SGI_make_current_read, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SUN_get_transparent_index, GLX_ARB_create_context
GLX version: 1.4
GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_SGI_make_current_read, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SUN_get_transparent_index, GLX_ARB_create_context
OpenGL vendor string: X.Org
OpenGL renderer string: Gallium 0.4 on AMD TURKS
OpenGL version string: 2.1 Mesa 7.12-devel (git-11cdf24)
OpenGL shading language version string: 1.20
OpenGL extensions:
GL_ARB_multisample, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_copy_texture,
GL_EXT_polygon_offset, GL_EXT_subtexture, GL_EXT_texture_object,
GL_EXT_vertex_array, GL_EXT_compiled_vertex_array, GL_EXT_texture,
GL_EXT_texture3D, GL_IBM_rasterpos_clip, GL_ARB_point_parameters,
When done running the 3D application, the X server can be killed (by pressing ctrl+c in the shell running it) and the discrete GPU can be powered down:
Code: Select all
# echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :Off:0000:01:00.0
The whole process can be scripted to automatically power on the GPU, start an X server for running a 3D application, and clean up afterwards. The Bumblebee project offers such scripts, like "optirun", for Intel/Nvidia systems; one may reuse most of them with minor changes. One such crude script is shown below:
Code: Select all
#!/bin/bash
#
# /usr/local/bin/pxp_run.bash
#
VGL_DISP=":8"
PROG_NAME="$(basename $0)"
PROG_NAME="${PROG_NAME%%.*}"
if [ $# -lt 1 ]; then
echo -ne "\n Usage: $(basename $0) <X-application>\n"
echo -e " Example: $(basename $0) glxgears\n"
exit
fi
echo -ne " ${PROG_NAME}: Enabling Discrete graphics card ... "
echo ON > /sys/kernel/debug/vgaswitcheroo/switch
echo "done"
echo -ne " ${PROG_NAME}: Starting X server on ${VGL_DISP} ... "
X -ac -config /etc/X11/xorg.conf.pxp -quiet -sharevts \
-nolisten tcp -noreset ${VGL_DISP} vt9 &
XPID="$!"
disown $XPID
echo "done"
echo -e " ${PROG_NAME}: Running X-application using VirtualGL ... \n"
vglrun -d ${VGL_DISP} "$@"
echo -e "\n ${PROG_NAME}: X-application stopped"
echo -ne " ${PROG_NAME}: Shutting down X server ... "
sleep 1
kill -9 ${XPID} > /dev/null 2>&1
sleep 1
echo "done"
echo -ne " ${PROG_NAME}: Disabling Discrete graphics card ... "
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
echo "done"
Useful links:
[1] Linux hybrid graphics at Launchpad.net: https://launchpad.net/~hybrid-graphics-linux
[2] Bumblebee project: https://launchpad.net/~mj-casalogic/+archive/bumblebee/
[3] Ironhide project: https://launchpad.net/~mj-casalogic/+archive/ironhide/
[4] VirtualGL: http://sourceforge.net/projects/virtualgl/
[5] A Gentoo Forums thread on muxless Intel/ATI hybrid graphics: http://forums.gentoo.org/viewtopic-t-881115.html
[6] A Gentoo wiki on hybrid radeon graphics: http://en.gentoo-wiki.com/wiki/Fglrx-hybrid-graphics
[7] Ebuild for VirtualGL: https://github.com/speckins/usr-local-portage/tree/master/x11-misc/virtualgl
[8] Alternate link for the VirtualGL ebuild: http://hirakendu.mooo.com/gentoo/gentoo-2010/config-scripts/portage-updates/x11-misc/virtualgl/
[9] Gentoo bug request for a Bumblebee ebuild: https://bugs.gentoo.org/show_bug.cgi?id=384083