Gentoo Forums
The Ghost of CPU Future
Gentoo Forums Forum Index :: Off the Wall
pjp
Administrator

Joined: 16 Apr 2002
Posts: 16116
Location: Colorado

PostPosted: Thu Oct 04, 2012 12:33 am    Post subject: The Ghost of CPU Future

Intel CPU Prices Stagnate As AMD Sales Decline
Quote:
Three years of historical data shows that Intel CPU prices have remained stagnant, especially for models that cost $200 and up. AMD chips, on the other hand, tend to fall in price steadily after they first hit the market.


I just wish GPUs were useful for something other than games or accelerated desktop eye-candy. Though I am looking to build a Trinity HTPC of sorts.
_________________
lolgov. 'cause where we're going, you don't have civil liberties.

In Loving Memory
1787 - 2008
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Thu Oct 04, 2012 3:04 am

They do have additional uses.

The problem is that most developers are unable to utilise GPUs for their tasks, simply because they lack the necessary skills. Or toolsets.

Hell, I used to work for a company that primarily does image processing, and they couldn't figure out how to use CUDA for it even though it would have been a perfect fit, simply because they were stuck on Borland.
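For what it's worth, the reason image processing maps so well onto CUDA is that most of it is per-pixel and embarrassingly parallel. A minimal pure-Python sketch of that property (the toy image and function name are just for illustration; a CUDA kernel would run one thread per pixel instead of looping):

```python
# Per-pixel grayscale conversion: each output pixel depends only on its
# own input pixel, so every iteration of this loop is independent --
# exactly the property a CUDA kernel exploits with one thread per pixel.

def to_gray(pixels):
    """pixels: list of (r, g, b) tuples; returns list of luma values."""
    # Rec. 601 luma weights; no iteration reads any other pixel's result.
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

image = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(to_gray(image))  # → [76, 150, 29, 255]
```

The same shape covers most filters, convolutions and colour-space conversions, which is why it was such a good fit for that shop.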
_________________
"Confident, lazy, cocky, dead." -- Felix Jongleur, Otherland

( Twitter | Blog | GitHub )
wildhorse
Tux's lil' helper

Joined: 16 Mar 2006
Posts: 148
Location: Estados Unidos De América

PostPosted: Thu Oct 04, 2012 6:16 am

Apropos tools: IMHO the days of CUDA, OpenCL and that stupid pseudo-assembler PTX are numbered. Now that NVIDIA has finally released details about the internals of their GPUs and thrown a real assembler (one that represents the actual architectures) and everything else that comes with CUDA at the LLVM community, I expect to see CPU and GPU merged behind one (1) compiler, all controlled by the kernel of the operating system. The question is how long it will take AMD, ARM and Intel to figure that out.
I am sure Microsoft is already working on a plan to assimilate all the hard work of the open source community.
No chance for Borland developers, though. :lol:

A Trinity HTPC of sorts calls for a five-year plan. :P
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Thu Oct 04, 2012 6:59 am

That's rather good to know :D. And yeah, I can't remember how often I told them to switch. They thought about maybe -MAYBE- switching to MSVC++ with Qt, but SOMEONE on the team is rather adamant about the layout being precisely the way he wants it to be. Down to the pixel.

Probably going to see something like that from MS too though. If the specs are public, there's no real reason for them not to.
pjp
Administrator

Joined: 16 Apr 2002
Posts: 16116
Location: Colorado

PostPosted: Fri Oct 05, 2012 1:34 am

mdeininger wrote:
They do have additional uses.

The problem is that most developers are unable to utilise GPUs for their tasks
I thought it was apparent that I wasn't referring to development.
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Fri Oct 05, 2012 3:52 pm

pjp wrote:
mdeininger wrote:
They do have additional uses.

The problem is that most developers are unable to utilise GPUs for their tasks
I thought it was apparent that I wasn't referring to development.
Some nice tools that are able to use GPUs do exist already. I've seen betas of some video transcoders and the like. I wouldn't count those as development-related. Sadly there aren't too many of them, which is why I pointed out how much most software developers suck at their job.
dE_logics
Advocate

Joined: 02 Jan 2009
Posts: 2191
Location: $TERM

PostPosted: Fri Oct 05, 2012 5:58 pm

Since the ATI takeover, I've completely lost faith in AMD. Intel has the best support, and since I don't need to play heavy games, why would I settle for ATI graphics, which give more problems than Nvidia...

And since my last AMD/ATI experience, I swear I won't buy any more...

Their condition is like VIA's. Windows users don't realise that Windows runs on AMD and VIA chips, so they refrain from buying -- and the rest (Linux users) don't have faith in their support.
_________________
Buy from companies supporting opensource -- IBM, Dell, HP, Hitachi, Google etc...
Disfavor companies supporting only Win -- Logitech, Epson, Adobe, Autodesk, Pioneer, Kingston, WD, Yahoo, MSI, XFX
My blog
pjp
Administrator

Joined: 16 Apr 2002
Posts: 16116
Location: Colorado

PostPosted: Sat Oct 06, 2012 12:29 am

mdeininger wrote:
Some nice tools that are able to use GPUs do exist already. I've seen betas of some video transcoders and the like. I wouldn't count those as development-related. Sadly there aren't too many of them, which is why I pointed out how much most software developers suck at their job.
Yeah. I was thinking more along the lines of general computing. Using the GPU to compile, perform encryption, etc.
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Sat Oct 06, 2012 11:26 am

pjp wrote:
Yeah. I was thinking more along the lines of general computing. Using the GPU to compile, perform encryption, etc.
Indeed, most of those should be perfectly suited for GPUs. Problem is it'd probably need a radical kernel overhaul (no matter which kernel that would be, except maybe Plan 9). For some reason kernel guys only ever do symmetric multi-processing, not asymmetric multi-processing...
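To make the SMP/AMP distinction concrete: under symmetric multi-processing the scheduler treats every core as interchangeable, while an asymmetric scheduler has to know what each processor class can actually run and route work accordingly. A toy sketch of that routing decision (the task kinds and the capability table are invented purely for illustration):

```python
# Toy asymmetric dispatch: route each task to the most specialised unit
# that can handle it, instead of assuming all processors are identical.
# "branchy"/"io"/"data_parallel" and this table are made-up examples.

CAPABILITIES = {
    "cpu": {"branchy", "io", "data_parallel"},  # general purpose
    "gpu": {"data_parallel"},                   # wide but specialised
}

def dispatch(task_kind):
    """Prefer the GPU when it can run the task; fall back to the CPU."""
    if task_kind in CAPABILITIES["gpu"]:
        return "gpu"
    if task_kind in CAPABILITIES["cpu"]:
        return "cpu"
    raise ValueError(f"no unit can run {task_kind!r}")

print(dispatch("data_parallel"))  # → gpu
print(dispatch("io"))             # → cpu
```

A real AMP kernel would of course also have to handle separate memory spaces, binary formats and I/O paths per processor class, which is where the hard part starts.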
wildhorse
Tux's lil' helper

Joined: 16 Mar 2006
Posts: 148
Location: Estados Unidos De América

PostPosted: Sat Oct 06, 2012 2:05 pm

I would not blame the makers of the Linux kernel. The GPU makers are hardly known for their support of the open source community. Why bother with running Linux on a GPU when you have to deal with things like the pseudo-assembler PTX and other top secret asininities? Not to mention the primitive, flawed software development tools which come with CUDA and OpenCL from the GPU makers.

And there are technical issues which make it difficult to do AMP on a heterogeneous architecture. How does a GPU access a disk controller, for instance? How do you ensure that multi-host access via PCI/PCIe works on every PC, something the PCI/PCIe committees never defined? It is not the fault of the Linux kernel developers that the PC world ditched Futurebus+ in favour of a typical PC solution like PCI.

We will see how things develop. Chances are that with better support for LLVM we may see AMP on Linux one day.
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Sat Oct 06, 2012 3:42 pm

Uhm, I never said it was a problem specific to Linux. There's no kernel that comes even close, with the possible exception of Plan 9. And it's not like you couldn't route DMA I/O to graphics ram ;).
wildhorse
Tux's lil' helper

Joined: 16 Mar 2006
Posts: 148
Location: Estados Unidos De América

PostPosted: Sat Oct 06, 2012 4:27 pm

Oh, my bad that I implied a Linux kernel. I shouldn't do that in an OTW Gentoo Linux forum, I guess.

DMA via PCI/PCIe between the CPU and peripheral devices is quite different from access between two peripheral devices via PCI/PCIe without the CPU in between. The latter is something that the makers of PCI/PCIe have not defined.
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Sat Oct 06, 2012 5:29 pm

wildhorse wrote:
Oh, my bad that I implied a Linux kernel. I shouldn't do that in an OTW Gentoo Linux forum, I guess.
well no, it is a solid assumption. that's why I put in brackets that this is a common problem with all kernels except Plan 9.

wildhorse wrote:
DMA via PCI/PCIe between the CPU and peripheral devices is quite different from access between two peripheral devices via PCI/PCIe without the CPU in between. The latter is something that the makers of PCI/PCIe have not defined.
there are some more or less standard ways to do it, however. and considering you are running the kernel on the CPU no matter what you do, there's no reason not to have it help out a bit.

and then there's the odd case where a GPU - or any other auxiliary processor - is actually ON the CPU and they're sharing the same ram...
wildhorse
Tux's lil' helper

Joined: 16 Mar 2006
Posts: 148
Location: Estados Unidos De América

PostPosted: Sat Oct 06, 2012 8:37 pm

mdeininger wrote:
wildhorse wrote:
DMA via PCI/PCIe between the CPU and peripheral devices is quite different from access between two peripheral devices via PCI/PCIe without the CPU in between. The latter is something that the makers of PCI/PCIe have not defined.
there are some more or less standard ways to do it, however.
No, there is nothing of the sort in the standards. A few white papers have recently been published suggesting ways to extend the standards, and this year we saw the first workaround on the market (Magma). It is going to take a while until that feature makes it into the standard. If that ever happens at all. And then we would still have to see it in the PC market.
Quote:
and considering you are running the kernel on the CPU no matter what you do, there's no reason not to have it help out a bit.
You want to use the CPU with its computing power and its memory to handle all the I/O of the other processors? While this works for special applications like those which we already put onto the GPU, that will hardly be of any benefit for most applications. I/O is one of the challenges which have to be addressed before the GPUs do what pjp has in mind.
Quote:
and then there's the odd case where a GPU - or any other auxiliary processor - is actually ON the CPU and they're sharing the same ram...
Forget about I/O via a common RAM. Forget about CPUs and GPUs sharing common RAM. That is just too slow.
tylerwylie
Guru

Joined: 19 Sep 2004
Posts: 456
Location: /US/Illinois

PostPosted: Sat Oct 06, 2012 8:49 pm

wildhorse wrote:
mdeininger wrote:
wildhorse wrote:
DMA via PCI/PCIe between the CPU and peripheral devices is quite different from access between two peripheral devices via PCI/PCIe without the CPU in between. The latter is something that the makers of PCI/PCIe have not defined.
there are some more or less standard ways to do it, however.
No, there is nothing of the sort in the standards. A few white papers have recently been published suggesting ways to extend the standards, and this year we saw the first workaround on the market (Magma). It is going to take a while until that feature makes it into the standard. If that ever happens at all. And then we would still have to see it in the PC market.
Quote:
and considering you are running the kernel on the CPU no matter what you do, there's no reason not to have it help out a bit.
You want to use the CPU with its computing power and its memory to handle all the I/O of the other processors? While this works for special applications like those which we already put onto the GPU, that will hardly be of any benefit for most applications. I/O is one of the challenges which have to be addressed before the GPUs do what pjp has in mind.
Quote:
and then there's the odd case where a GPU - or any other auxiliary processor - is actually ON the CPU and they're sharing the same ram...
Forget about I/O via a common RAM. Forget about CPUs and GPUs sharing common RAM. That is just too slow.
You alright? You haven't revealed eurofapping American-penis-envy and are contributing to a discussion.
wildhorse
Tux's lil' helper

Joined: 16 Mar 2006
Posts: 148
Location: Estados Unidos De América

PostPosted: Sat Oct 06, 2012 9:11 pm

tylerwylie wrote:
You alright? You haven't revealed eurofapping American-penis-envy and are contributing to a discussion.
This has been a discussion among two civilised Europeans so far.
Wait until BoneKracker joins the discussion. :lol:
roarinelk
Guru

Joined: 04 Mar 2004
Posts: 444

PostPosted: Sun Oct 07, 2012 7:14 am

mdeininger wrote:
[...] which is why I pointed out how much most software developers suck at their job.


It's not that simple. Have you ever written code for a DSP? To get good performance out of it you need to think about parallel instruction scheduling and interdependencies, cache/memory latencies, etc. to avoid any stalls. I imagine that modern GPUs aren't that different in that regard: lots of tiny multiply-add units with tiny local caches, connected to a larger cache with certain latencies, controlled by another unit. The amount of man-hours required to tune each algorithm to get even 80% utilization isn't worth the performance gain. It's far easier to coarsely split the algorithm to run so-so on multicore general-purpose CPUs, and you don't really have to worry about all the low-level CPU internals.
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Sun Oct 07, 2012 10:52 am

wildhorse wrote:
tylerwylie wrote:
You alright? You haven't revealed eurofapping American-penis-envy and are contributing to a discussion.
This has been a discussion among two civilised Europeans so far.
Wait until BoneKracker joins the discussion. :lol:
word broham :lol:.

Alright, it's not that easy. I never said it was easy; it's really hard, even. But it's not impossible.

The question would be whether all those methods really would be too slow. My gut feeling says: it'll be pretty darn slow, but not so slow as to starve any additional processors. Might actually be worth looking into.

Although to properly use a computer the way pjp suggests, this would need so many other changes to the kernel...

roarinelk wrote:
mdeininger wrote:
[...] which is why I pointed out how much most software developers suck at their job.


It's not that simple. Have you ever written code for a DSP? To get good performance out of it you need to think about parallel instruction scheduling and interdependencies, cache/memory latencies, etc. to avoid any stalls. I imagine that modern GPUs aren't that different in that regard: lots of tiny multiply-add units with tiny local caches, connected to a larger cache with certain latencies, controlled by another unit. The amount of man-hours required to tune each algorithm to get even 80% utilization isn't worth the performance gain. It's far easier to coarsely split the algorithm to run so-so on multicore general-purpose CPUs, and you don't really have to worry about all the low-level CPU internals.
To answer your first question: yes, I have written DSP code in the past. I've even designed custom ASICs for some projects. It's not all that hard: 80% utilisation on a decent DSP is quite achievable (although not all DSPs are decent). Every computer scientist or circuit engineer SHOULD be able to do this just fine. But that's not the kind of developer I'm talking about.

The kind of "developer" I'm talking about is unable to use a well-documented class because they can't find comments in headers. It's the kind that tackles new problems by smorgasboarding examples from all over the net together and beating them with a hammer (or the borland c++ compiler) until they sorta work. They come up behind you when you're working on backend stuff to tell you "oi, that button should go 3 pixels to the right!" - "but I'm not even working on the UI..." - "3 PIXELS TO THE RIGHT!!!!!!" -- And there's A LOT of developers like that running around in the wild. That's why I'm saying most of them suck at their job.
pjp
Administrator

Joined: 16 Apr 2002
Posts: 16116
Location: Colorado

PostPosted: Tue Oct 09, 2012 4:09 am

wildhorse wrote:
You want to use the CPU with its computing power and its memory to handle all the I/O of the other processors? While this works for special applications like those which we already put onto the GPU, that will hardly be of any benefit for most applications. I/O is one of the challenges which have to be addressed before the GPUs do what pjp has in mind.
How is I/O an issue, given that cards already exist to perform similar functions? Graphics is the most obvious example, but I've seen servers with dedicated encryption hardware to offload those tasks. How is this different? (I'm assuming you're correct, just looking to understand.)


mdeininger wrote:
Although to properly use a computer the way pjp suggests, this would need so many other changes to the kernel...
So for graphics, these "changes" (or at least comparable ones) already exist?

For compiling, one of the options I was thinking of was along the lines of distcc.


Based on the above, in general, it unfortunately sounds like AMD has gone the wrong path for me for the foreseeable future. I have no real use for graphics performance.
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Tue Oct 09, 2012 7:58 am

pjp wrote:
mdeininger wrote:
Although to properly use a computer the way pjp suggests, this would need so many other changes to the kernel...
So for graphics, these "changes" (or at least comparable ones) already exist?
well, to run any kind of binary, what you need at the bare minimum is a loader for the binary and something in the kernel that deals with the binary's ABI. And of course, the binaries themselves.

Now for, say, NVIDIA GPU code, I don't remember there being any Linux system ABI to work with. So someone would have to come up with that. Then someone would have to define a binary format for it and tell the kernel about it. This could've been a lot easier if the kernel crowd had accepted FatELF binaries back when those were hot, but no use crying about it now.

At the same time, someone would need to teach a general purpose C compiler how to generate NVIDIA GPU code to get binaries to use. I'm guessing the LLVM folks are working on that already, probably also on an extension to ELF to carry it.

So, not quite there yet if I got you right in the first place.

pjp wrote:
For compiling, one of the options I was thinking of was along the lines of distcc.
That could actually be quite a lot easier. Loading the compiler onto the GPU could already be done in userspace, right now. Same for uploading the source code and then downloading the resulting object files. All that'd need is a compiler built as GPU code to start from. And again, that's probably what the LLVM folks are up to already.
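The distcc-style part really is just userspace job scheduling. A rough sketch, with a local thread pool standing in for remote hosts (or, in the idea above, a GPU-resident compiler); `fake_compile` is a placeholder where distcc would shell out to a real compiler per translation unit:

```python
# distcc-style farm-out, modelled with a thread pool. Translation units
# are independent, so they can be compiled in parallel; only the final
# link step (not shown) needs all the results at once.

from concurrent.futures import ThreadPoolExecutor

def fake_compile(source_file):
    # Placeholder for "compile one translation unit, return an object file".
    return source_file.replace(".c", ".o")

def build(sources, jobs=4):
    # pool.map preserves input order, so objects line up with sources.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(fake_compile, sources))

print(build(["main.c", "util.c", "io.c"]))  # → ['main.o', 'util.o', 'io.o']
```

Swap the worker function for "upload source, run remote/GPU compiler, download object" and the scheduling side of the idea is already done.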

pjp wrote:
Based on the above, in general, it unfortunately sounds like AMD has gone the wrong path for me for the foreseeable future. I have no real use for graphics performance.
It's anybody's guess what the AMD people are thinking lately. But NVIDIA's still as good as it ever was... never had any trouble at all with their gear on Linux.
pjp
Administrator

Joined: 16 Apr 2002
Posts: 16116
Location: Colorado

PostPosted: Wed Oct 10, 2012 3:19 am

Sounds like progress on LLVM will be a significant "breakthrough" point.

I'm waiting to see next month's Piledriver / Vishera and what it can do. I don't need anything right now, so that may do well enough if I decide I need something newer. Trinity should be fine for an audio/HTPC I plan to work on (TV tuner card / functionality is the hold up).
pigeon768
l33t

Joined: 02 Jan 2006
Posts: 669

PostPosted: Wed Oct 10, 2012 4:29 pm

I would be supremely surprised if anyone got a compiler running on a GPU anytime soon. It's just not designed for that type of work. There's no heavy math in compilation: it's all logic and regex, which GPUs are terrible at.
_________________
My political bias.
mdeininger
Veteran

Joined: 15 Jun 2005
Posts: 1740
Location: Emerald Isles, overlooking Dublin's docklands

PostPosted: Wed Oct 10, 2012 10:19 pm

pjp wrote:
Sounds like progress on LLVM will be a significant "breakthrough" point.
quite. it's really great to finally see a decent compiler crew emerge from open source land :).

pigeon768 wrote:
I would be supremely surprised if anyone got a compiler running on a GPU anytime soon. It's just not designed for that type of work. There's no heavy math in compilation: it's all logic and regex, which GPUs are terrible at.
this isn't *quite* true. one thing GPUs excel at is doing an awful lot in parallel, and compiling code is one of those tasks that parallelise really well. not all stages, of course, but almost all of the optimisation and transformation stages are a perfect fit for a GPU.
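to make the parallelisable-stages point concrete: per-function optimisation passes have (barring inlining) no cross-function dependencies, so each function's IR can be transformed on its own worker. a toy sketch with a made-up two-field "IR" and a trivial constant-folding pass, both invented for illustration:

```python
# Each module entry is one function: (name, expression). No pass here
# reads another function's IR, so the whole list maps in parallel --
# the same shape of work wide parallel hardware is built for.

from concurrent.futures import ThreadPoolExecutor

def fold_constants(func_ir):
    """Toy pass: evaluate a literal integer addition in a tiny fake IR."""
    name, expr = func_ir
    parts = expr.split()
    if len(parts) == 3 and parts[1] == "+" \
            and parts[0].isdigit() and parts[2].isdigit():
        expr = str(int(parts[0]) + int(parts[2]))
    return (name, expr)

module = [("f", "1 + 2"), ("g", "40 + 2"), ("h", "x + 1")]

with ThreadPoolExecutor() as pool:
    optimised = list(pool.map(fold_constants, module))
print(optimised)  # → [('f', '3'), ('g', '42'), ('h', 'x + 1')]
```

parsing and linking still want a sequential pass, which is why the front and back ends would likely stay on the CPU.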