Spoony Tux's lil' helper
Joined: 18 Feb 2004 Posts: 99 Location: Washington DC
|
Posted: Thu Feb 09, 2006 9:32 pm Post subject: |
|
|
I'm still having issues with it completely locking up my computer after about 2 minutes.
I'm guessing it's either a memory leak or a CPU race condition... associated with fglrx, maybe?
Does anyone using fglrx (ati-drivers 8.21.7) experience this issue?
I know it's beta software, but damn it's pretty and I want it _________________ Regards,
Mike Sponsler
msponsler at gmail.com |
Mikos Tux's lil' helper
Joined: 04 Feb 2004 Posts: 91 Location: Prague, Czech Republic
|
Posted: Thu Feb 09, 2006 9:44 pm Post subject: |
|
|
PyroBoy101: KDE 4 will be able to use Xgl too. And Plasma is a really great design (you can have many layers on the desktop - for example a wallpaper (it can even be an animated wallpaper from E17), then water effects like raindrops, then animated SVG icons, then desklets/plasmoids. Panels will also just be plasmoids, nothing different).
Qt4 has the vector-based Arthur paint engine, which can use OpenGL (similar to Cairo + Glitz), PDF, etc. as output. KDE4 will be a really great system... _________________ o Athlon-XP 2600+ (512kB cache), 512MB DDR400 RAM, GeForce FX-5600 128MB
o using Arch Linux now, but still love Gentoo
Last edited by Mikos on Thu Feb 09, 2006 9:45 pm; edited 1 time in total |
Rayno n00b
Joined: 08 Feb 2006 Posts: 11
|
Posted: Thu Feb 09, 2006 9:45 pm Post subject: |
|
|
pijalu wrote: | for the fun, a patch to enable switcher plugin in compiz:
|
Cool. Is it at all stable? If so, I wonder why they don't build it by default. |
stalynx Apprentice
Joined: 03 Oct 2002 Posts: 162
|
Posted: Thu Feb 09, 2006 10:14 pm Post subject: |
|
|
Let's do some reporting here.
Code: | fglrx(0): Chipset: "MOBILITY RADEON 9000 (M9 4C66)" (Chipset = 0x4c66)
(II) fglrx(0): Kernel Module Version Information:
(II) fglrx(0): Name: fglrx
(II) fglrx(0): Version: 8.21.7
(II) fglrx(0): Date: Jan 14 2006
(II) fglrx(0): Desc: ATI FireGL DRM kernel module
(II) fglrx(0): Kernel Module version matches driver.
(II) fglrx(0): Kernel Module Build Time Information:
(II) fglrx(0): Build-Kernel UTS_RELEASE: 2.6.15-nitro3
|
Using Hanno's xgl-overlay-200602093
Xgl: Built and Works.
Mesa-6.4.3_alpha20060209: Built and Works.
Compiz: Built but Does not Work. Results in screen distortions.
XGL Performance: Good
Compiz Performance: NA |
Rayno n00b
Joined: 08 Feb 2006 Posts: 11
|
Posted: Thu Feb 09, 2006 10:17 pm Post subject: |
|
|
Hey stalynx, have you tried using the latest overlay from http://dev.gentoo.org/~hanno/ ? That one worked for me without any extra patching. |
dmsnell Tux's lil' helper
Joined: 04 Oct 2005 Posts: 79
|
Posted: Thu Feb 09, 2006 10:36 pm Post subject: |
|
|
Well, the error now is (when starting compiz )
Code: |
compiz: glXBindTexImageEXT is missing
compiz: Failed to manage screen: 0
compiz: No managable screens found on display :1.0
|
This is the first confirmed missing dependency, because it doesn't show up in glxinfo |
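For anyone hitting the same wall: glXBindTexImageEXT belongs to the GLX_EXT_texture_from_pixmap extension, so a quick sanity check is to grep the Xgl server's GLX extension list for it. A minimal sketch, assuming glxinfo (from mesa-progs) is installed and Xgl is running on display :1 — the canned string at the bottom is just an offline demonstration:

```shell
#!/bin/sh
# Report whether a GLX extension name appears in glxinfo-style output on stdin.
check_glx_ext() {
    if grep -q "$1"; then
        echo "$1: present"
    else
        echo "$1: MISSING"
    fi
}

# Live usage against a running Xgl server (not run here):
#   DISPLAY=:1 glxinfo | check_glx_ext GLX_EXT_texture_from_pixmap

# Offline demonstration with a canned extension string:
echo "GLX extensions: GLX_ARB_multisample GLX_EXT_visual_info" \
    | check_glx_ext GLX_EXT_texture_from_pixmap
```

If the extension really is missing from the server's list, no amount of client-side fiddling will make compiz manage the screen.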
stalynx Apprentice
Joined: 03 Oct 2002 Posts: 162
|
Posted: Thu Feb 09, 2006 10:38 pm Post subject: |
|
|
Rayno wrote: | Hey stalynx, have you tried using the latest overlay from http://dev.gentoo.org/~hanno/ ? That one worked for me without any extra patching. |
Nah I tried but I still get this craziness. |
Tyler_Durden Apprentice
Joined: 27 Jul 2004 Posts: 189 Location: Germany
|
Posted: Thu Feb 09, 2006 10:50 pm Post subject: |
|
|
stalynx wrote: | Let's do some reporting here. |
ok:
Code: | X Window System Version 7.0.0
Release Date: 21 December 2005
X Protocol Version 11, Revision 0, Release 7.0
Build Operating System:Linux 2.6.15-nitro3 x86_64
Current Operating System: Linux HAL9000 2.6.15-nitro3 #1 PREEMPT Sun Jan 15 20:16:30 CET 2006 x86_64
(II) NVIDIA dlloader X Driver 1.0-8178 Wed Dec 14 16:59:38 PST 2005
(II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
(--) Chipset NVIDIA GPU found
(II) NVIDIA(0): NVIDIA GPU detected as: GeForce 7800 GT
(--) NVIDIA(0): VideoBIOS: 05.70.02.13.00
(II) NVIDIA(0): Detected PCI Express Link width: 16X
(--) NVIDIA(0): VideoRAM: 262144 kBytes
|
Using Hanno's xgl-overlay-200602094
Xgl: Built and Works.
Mesa-6.4.3_alpha20060209: Built and Works.
Compiz: Built and Works
XGL Performance: Great
Compiz Performance: Great, Poor under Load
Now waiting for a new compiz that will build & install its already-included kde-window-decorator! _________________ Gentoo 17.1 x86_64
Intel Core i9-9900K
Asus MAXIMUS XI HERO
AMD Radeon 6800XT
64GB DDR4
Samsung SSD 970 EVO Plus 1TB
8x Seagate Archive (SATA-RAID 64TB)
Digital Devices Cine S2 V6.5 DVB Adapter |
tchak Tux's lil' helper
Joined: 19 Aug 2003 Posts: 124 Location: France/Russia
|
Posted: Thu Feb 09, 2006 10:55 pm Post subject: |
|
|
hi
I have an ATI card with Drivers 8.21.7.
I built all the stuff without problems, but I can't start Xgl with direct rendering enabled.
I can only start it on DISPLAY=:1, and if I do so, I don't have direct rendering. On DISPLAY=:0 it just hangs at startup...
Any suggestions? _________________ maybe this world is another planet's hell...
Athlon64 | ATI Mobility Radeon 9700 | Netgear (Prism54) + ASUS (USB2 ZD1211) |
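For what it's worth, the way people in this thread run Xgl is nested on top of the existing fglrx-driven server, not as a replacement for it — which is why :0 hangs. A rough sketch of that startup; the flag names are the ones floating around for the current snapshots, so treat them as assumptions and adjust to your build:

```shell
#!/bin/sh
# Xgl gets its GL context *from* the regular X server that owns :0,
# so it has to come up as a second display (:1). Starting it directly
# on :0 just deadlocks against the server it depends on.
XGL_DISPLAY=":1"   # display Xgl itself will serve
HOST_DISPLAY=":0"  # the fglrx-driven server Xgl renders through

# On a running :0 session (not executed here):
#   Xgl $XGL_DISPLAY -ac -accel glx:pbuffer -accel xv:pbuffer &
#   DISPLAY=$XGL_DISPLAY glxinfo | grep 'direct rendering'

echo "start Xgl on $XGL_DISPLAY, leave $HOST_DISPLAY to the regular server"
```

Whether clients on :1 then report direct rendering depends on what the fglrx GL stack exposes to Xgl, not on anything you can toggle on the client side.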
ackward Apprentice
Joined: 06 Sep 2002 Posts: 192
|
Posted: Thu Feb 09, 2006 11:03 pm Post subject: |
|
|
how about a good howto, consolidating all the scattered posts?
http://gentoo-wiki.com/Main_Page could be a good start. |
bedazzled n00b
Joined: 21 May 2005 Posts: 39 Location: Athens, Greece
|
Posted: Thu Feb 09, 2006 11:07 pm Post subject: |
|
|
Nihilus wrote: | bedazzled wrote: | Nihilus wrote: | Should it really eat CPU since it's hw accelerated? :-/ What kinda gfx-card do you have? |
/usr/share/doc/nvidia-glx-1.0.8178/README.gz wrote: | DISABLING CPU-SPECIFIC FEATURES
Setting the environment variable __GL_FORCE_GENERIC_CPU to a non-zero value
will inhibit the use of CPU-specific features such as MMX, SSE, or 3DNOW!. Use
of this option may result in performance loss. This option may be useful in
conjunction with software such as the Valgrind memory debugger.
|
Maybe it makes sense now?
The CPU optimizations MUST be removed ASAP from the drivers. They are pretty useless.
They were useful long ago, when we had no GPUs (the pre-GeForce/Radeon era).
When I run glxgears on an nVidia 6600 GT with the closed drivers, I get ~99% CPU usage, and on a Radeon 9000 with the open drivers about 50-60%.
Anyway, kudos to all Xgl testers, I don't have much time for experimenting currently. |
Admit it: You have no clue what you are talking about. I'm not a moron, not even the average Joe... I'm in the 5th percentile among computer users. I guess that means I know how to use Google. I know how to RTFM. Last night, in fact, I went through that document. This option is mainly intended for debugging calls to the driver. Not using SIMD instructions would imply more load on the CPU, btw... Go google SIMD.
I myself remember the pre-GeForce epoch (before 98-99). It was kinda dull... Well, the point is there were no SIMD instructions back then on the x86. EDIT: I guess we had MMX. But certainly not the SSE instruction set. |
You certainly have no clue, pal.
3DNow! was introduced in 1998. The first GPU (GeForce 256) was introduced in 1999, but it was too expensive at the time to become mainstream.
GPUs were a breakthrough for matrix multiplication (Transform & Lighting*); until then the CPU had to accomplish this tedious task via the FPU and, later on, via SIMD instructions.
Remember when Intel launched the Pentium III, demonstrating its "3D" capabilities?
The conclusion is that we don't need the CPU to assist 3D anymore (CPUs are really "weak" compared to GPUs, regardless of clock rate), unless you want to drag a window in Xgl and have 100% CPU usage. But you don't want to, do you?
Concerning MMX, you are COMPLETELY IGNORANT.
MMX handles ONLY packed integers (in 64-bit registers); it doesn't help at all in 3D, because we need floating-point values (hence 3DNow!/SSE/SSE2 etc.).
So please spare me, and next time do your homework before insulting.
(*) http://en.wikipedia.org/wiki/Transform_and_lighting |
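For anyone who would rather test the README switch than argue about it: __GL_FORCE_GENERIC_CPU disables the nvidia driver's MMX/SSE/3DNow! code paths (it does not move any work to the GPU). A minimal sketch, assuming the closed nvidia driver and glxgears are installed — the live command is left as a comment:

```shell
#!/bin/sh
# Debug run with SIMD paths disabled -- expect this to be *slower*,
# which is exactly why the README describes it as a debugging aid
# (e.g. for running the driver under Valgrind), not an optimization:
#   __GL_FORCE_GENERIC_CPU=1 glxgears

# The variable is only honored by the nvidia GL library, so exporting
# it is harmless elsewhere. Demonstration of the export:
__GL_FORCE_GENERIC_CPU=1
export __GL_FORCE_GENERIC_CPU
echo "generic CPU paths forced: ${__GL_FORCE_GENERIC_CPU}"
```

Comparing glxgears frame rates with and without the variable set settles the "do SIMD optimizations help?" question empirically on your own hardware.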
Nihilus Tux's lil' helper
Joined: 23 Mar 2005 Posts: 80
|
Posted: Thu Feb 09, 2006 11:16 pm Post subject: |
|
|
I'm so not going there. _________________ Writing lots of posts doesn't necessarily make you a Guru; might prove that you're just too stupid to go figure |
ianegg Apprentice
Joined: 26 Oct 2005 Posts: 279 Location: Breakfast.
|
Posted: Thu Feb 09, 2006 11:23 pm Post subject: |
|
|
This really isn't the place for an argument, bedazzled. However, what you're saying goes against what nVidia's readme says, and against common sense.
The reason Xgl and glxcompmgr/compiz eat so many CPU cycles in the first place is because a lot of the work is currently done with Mesa, ie. in software. |
oggialli Guru
Joined: 27 May 2004 Posts: 389 Location: Finland, near Tampere
|
Posted: Thu Feb 09, 2006 11:24 pm Post subject: |
|
|
Actually, truth be told, both of you are being misled here. The alleged "bad CPU optimizations that need to be removed" are there to help performance A LOT. They don't (of course) make the drawing use any more CPU. What both of you aren't getting is that the CPU still does a lot that the GPU can't - it acts like a "controller" for the GPU.
Also, about open source "being better cos it doesn't have those bad bad optimizations": no, the open-source drivers have the same kind of (very useful) SIMD optimizations.
But, anyway, I'm not really into discussing this matter in this topic with people who are obviously very ignorant of anything. Neither should anyone else. Go troll in some HardOCP/Rage3D-like forums. _________________ IBM Thinkpad T42P - Gentoo Linux |
bedazzled n00b
Joined: 21 May 2005 Posts: 39 Location: Athens, Greece
|
Posted: Thu Feb 09, 2006 11:24 pm Post subject: |
|
|
Nihilus wrote: | MESA is sick shit! It's not accelerated really, since it's sw-emulation-layer for OGL... Real OGL is handled by hw like the SGIs, nvidia, ATI-chipsets...
When you link applications against the provided headers/libs, by SGI, nvidia,ATI etc, what you really do is letting the hw handling the calls by pointing at memory adresses IIRC... |
Oh boy, I should have noticed it earlier that you are talking crap.
FYI, my laptop plays just fine with "sick shit sw emulated" MESA+open-source drivers and guess what, it actually is hw accelerated.
I rest my case. |
Nihilus Tux's lil' helper
Joined: 23 Mar 2005 Posts: 80
|
Posted: Thu Feb 09, 2006 11:29 pm Post subject: |
|
|
Well, still I'm so not going there. _________________ Writing lots of posts doesn't necessarily make you a Guru; might prove that you're just too stupid to go figure |
bedazzled n00b
Joined: 21 May 2005 Posts: 39 Location: Athens, Greece
|
Posted: Thu Feb 09, 2006 11:54 pm Post subject: |
|
|
oggialli wrote: | Actually, to talk the truth, both of you are getting misleaded here. The alleged "need to be removed bad CPU optimizations" are there to help the performance A LOT. They don't (of course) make the drawing use any more CPU. What both of you aren't getting is that the CPU does a lot more than what the GPU can - even still. It's like a "controller" for the GPU.
Also about open source "being better cos it doesn't have those bad bad optimizations" no, open source drivers do have the same kind of (very useful) SIMD optimizations.
But, anyway, I'm not really into discussing this matter in this topic with people who are obviously very ignorant of anything. Neither should anyone else. Go troll in some HardOCP/Rage3D-like forums. |
I don't see any arguments at all, only insults, which is pretty sad...
Maybe you should have a look here -> http://www.gpgpu.org |
bedazzled n00b
Joined: 21 May 2005 Posts: 39 Location: Athens, Greece
|
Posted: Thu Feb 09, 2006 11:59 pm Post subject: |
|
|
ianegg wrote: | This really isn't the place for an argument bedazzled. However what you're saying goes againt what nVidia's readme says, and against common sense. |
Read again what the README says.
ianegg wrote: | The reason Xgl and glxcompmgr/compiz eat so many CPU cycles in the first place is because a lot of the work is currently done with Mesa, ie. in software. |
Plain wrong. Mesa calls the nVidia/ATI/whatever libglx.so; it is hw accelerated. |
pijalu Guru
Joined: 04 Oct 2004 Posts: 365
|
Posted: Fri Feb 10, 2006 12:10 am Post subject: |
|
|
Haystack wrote: | I'm following Rayno's steps (Posted: Wed Feb 08, 2006 2:27 pm) with the overlay from pijalu (Posted: Thu Feb 09, 2006 12:04 pm).
After I edited the patches I ran "emerge xgl", but that failed because of a missing /usr/lib/opengl/xorg-x11/lib/libGL.la
What's gone wrong? I'd really like to check out that neat cube etc.
Thanks
--edit:
Doh! I'm too excited for this... forgot to ebuild xgl-20060209.ebuild before emerging... (don't know yet if that solves the problem) AND forgot to sacrifice the goat |
Heu... normally the compiz overlay I posted doesn't need repatching
(especially if you take the latest version and compile mesa 6.4.3alpha20060209-r1, compiz-0.1-r1 and xgl-20060209-r1, because they use a snapshot of the CVS I compiled -- but please fall back to Hanno's; mine are just quick hacks to get Xgl running for more people) |
enzobelmont Guru
Joined: 06 Apr 2004 Posts: 345 Location: Chiapas, Mexico
|
Posted: Fri Feb 10, 2006 12:11 am Post subject: |
|
|
Code: | bash-2.05b# glxcompmgr wobbly shadow
glxcompmgr: GLX_MESA_render_texture is missing
glxcompmgr: Failed to manage screen: 0
glxcompmgr: No managable screens found on display :1.0
bash-2.05b#
|
using the latest overlay from http://dev.gentoo.org/~hanno/
Xgl runs fine
compiz gives the same error output. |
maxcow Tux's lil' helper
Joined: 04 Jul 2003 Posts: 126
|
Posted: Fri Feb 10, 2006 12:24 am Post subject: |
|
|
Is anyone still having the infamous black windows problem?
I got the new overlay from Hanno Boeck and followed all the steps; Xgl runs fine, but when I try to run compiz the windows turn black, and it outputs something like this:
Code: | compiz: pixmap 0x2000d3 can't be bound to texture
compiz: Couldn't bind redirected window 0x40001b to texture |
I went looking in the compiz code and found that this is where it's coming from:
Code: | screen->queryDrawable (...); /* calls glXQueryDrawable */
switch (target) {
case GLX_TEXTURE_2D_EXT:
    /* some code */
    break;
case GLX_TEXTURE_RECTANGLE_EXT:
    /* some more code */
    break;
case GLX_NO_TEXTURE_EXT:
    /* prints the error message!! */
    break;
}
|
So, it looks like my system doesn't support these extensions?
Yet, running glitzinfo on Xgl outputs:
Code: | texture rectangle: Yes
texture non power of two: No
texture mirrored repeat: Yes
texture border clamp: No
(..) |
And glxinfo on Xgl outputs among others: GL_EXT_texture_rectangle
I have an nVidia Geforce 4 MX, running proprietary nvidia drivers, of course..
So, to recap, am I alone in this misfortune? |
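One thing worth checking before blaming the card: the switch above keys on GLX texture targets (texture-from-pixmap), while GL_EXT_texture_rectangle — the name glxinfo showed — is a plain GL extension. glxinfo prints those in separate lists, so finding one says nothing about the other. A hedged sketch for telling them apart, assuming glxinfo and Xgl on :1; the canned strings below are only an offline stand-in:

```shell
#!/bin/sh
# Live checks against a running Xgl server (not run here):
#   DISPLAY=:1 glxinfo | grep -c GLX_EXT_texture_from_pixmap  # GLX list
#   DISPLAY=:1 glxinfo | grep -c GL_EXT_texture_rectangle     # GL list

# Offline stand-in: similarly named extensions, different lists.
glx_list="GLX_ARB_multisample GLX_SGIX_fbconfig"
gl_list="GL_EXT_texture_rectangle GL_ARB_multitexture"

case " $glx_list " in
    *" GLX_EXT_texture_from_pixmap "*) echo "GLX texture-from-pixmap: yes" ;;
    *)                                 echo "GLX texture-from-pixmap: no" ;;
esac
case " $gl_list " in
    *" GL_EXT_texture_rectangle "*) echo "GL texture_rectangle: yes" ;;
    *)                              echo "GL texture_rectangle: no" ;;
esac
```

If the GLX side comes up empty while the GL side looks healthy, the symptom matches the GLX_NO_TEXTURE_EXT branch in the code above.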
Nihilus Tux's lil' helper
Joined: 23 Mar 2005 Posts: 80
|
Posted: Fri Feb 10, 2006 12:25 am Post subject: |
|
|
bedazzled wrote: | ianegg wrote: | This really isn't the place for an argument bedazzled. However what you're saying goes againt what nVidia's readme says, and against common sense. |
Read again what the README says.
ianegg wrote: | The reason Xgl and glxcompmgr/compiz eat so many CPU cycles in the first place is because a lot of the work is currently done with Mesa, ie. in software. |
Plain wrong. MESA calls nVidia/ATI/whatever libglx.so, it is hw accelerated. |
From: http://www.mesa3d.org/faq.html
Quote: |
1.2 Does Mesa support/use graphics hardware?
Yes. Specifically, Mesa serves as the OpenGL core for the open-source DRI drivers for XFree86/X.org. See the DRI website for more information.
There have been other hardware drivers for Mesa over the years (such as the 3Dfx Glide/Voodoo driver, an old S3 driver, etc) but the DRI drivers are the modern ones. |
Quote: | 1.4 What's the difference between "Stand-Alone" Mesa and the DRI drivers?
Stand-alone Mesa is the original incarnation of Mesa. On systems running the X Window System it does all its rendering through the Xlib API:
* The GLX API is supported, but it's really just an emulation of the real thing.
* The GLX wire protocol is not supported and there's no OpenGL extension loaded by the X server.
* There is no hardware acceleration.
* The OpenGL library, libGL.so, contains everything (the programming API, the GLX functions and all the rendering code).
|
So I guess I'm pretty fucked with my non-DRI nvidia card ;-P _________________ Writing lots of posts doesn't necessarily make you a Guru; might prove that you're just too stupid to go figure
Last edited by Nihilus on Fri Feb 10, 2006 12:26 am; edited 1 time in total |
pijalu Guru
Joined: 04 Oct 2004 Posts: 365
|
Posted: Fri Feb 10, 2006 12:25 am Post subject: |
|
|
Rayno wrote: | pijalu wrote: | for the fun, a patch to enable switcher plugin in compiz:
|
Cool. Is it at all stable? If so, I wonder why they didn't have it being built. |
It's using GTK and it's... hum... nice, but not living up to the hype (e.g. far from the flashy Vista one) - anyway, with the formerly-known-as-Exposé plugin, alt+tab is just old school
screenshot: Switcher versus Scale |
ianegg Apprentice
Joined: 26 Oct 2005 Posts: 279 Location: Breakfast.
|
Posted: Fri Feb 10, 2006 12:26 am Post subject: |
|
|
bedazzled wrote: | ianegg wrote: | This really isn't the place for an argument bedazzled. However what you're saying goes againt what nVidia's readme says, and against common sense. |
Read again what the README says. |
"Use of this option may result in performance loss."
bedazzled wrote: | ianegg wrote: | The reason Xgl and glxcompmgr/compiz eat so many CPU cycles in the first place is because a lot of the work is currently done with Mesa, ie. in software. |
Plain wrong. MESA calls nVidia/ATI/whatever libglx.so, it is hw accelerated. |
http://www.mesa3d.org/faq.html
Please take this discussion somewhere else. Start a new thread in Off the Wall, perhaps? |
oggialli Guru
Joined: 27 May 2004 Posts: 389 Location: Finland, near Tampere
|
Posted: Fri Feb 10, 2006 12:31 am Post subject: |
|
|
bedazzled wrote: | oggialli wrote: | Actually, to talk the truth, both of you are getting misleaded here. The alleged "need to be removed bad CPU optimizations" are there to help the performance A LOT. They don't (of course) make the drawing use any more CPU. What both of you aren't getting is that the CPU does a lot more than what the GPU can - even still. It's like a "controller" for the GPU.
Also about open source "being better cos it doesn't have those bad bad optimizations" no, open source drivers do have the same kind of (very useful) SIMD optimizations.
But, anyway, I'm not really into discussing this matter in this topic with people who are obviously very ignorant of anything. Neither should anyone else. Go troll in some HardOCP/Rage3D-like forums. |
I don't see any arguments at all, only insults, which is pretty sad...
Maybe you should have a look here -> http://www.gpgpu.org |
Is it possible we have trolls on the more "serious" topics here nowadays too?
Or can someone really be so daft as to drag something like GPGPU into this...
THE GPU CAN NOT RUN THE WHOLE PROGRAM CONTROL FLOW. THE CPU HAS TO DO THAT. RUNNING SCIENTIFIC CALCULATIONS (AND SEVERELY LIMITED CASES AT THAT) ON THE GPU HAS NOTHING TO DO WITH THIS. THE CONTROL FLOW IS STILL RUN ON THE MAIN CPU. FULL STOP. AWAITING ACK.
Still, if you can't accept the facts, do it elsewhere. This topic isn't a kindergarten or the school science club where you can freely guess how stuff really works. This topic is meant for people to share their experiences and problems with Xgl, i.e. not a place for people with such an obvious lack of knowledge of the modern 3D graphics workflow to argue about their assumptions. _________________ IBM Thinkpad T42P - Gentoo Linux |