Any comments on the latest Distrowatch Weekly?

Opinions, ideas and thoughts about Gentoo. Anything and everything about Gentoo except support questions.
39 posts
StifflerStealth
Retired Dev
Posts: 968
Joined: Wed Jul 03, 2002 8:20 pm

Post by StifflerStealth » Thu Sep 06, 2007 3:26 pm

9 months ago I left Gentoo for Arch. At that time, portage was dog slow. I used straight portage, no plugins -- no crap. Pacman in Arch just seemed like a godsend because of the speed. I was tired of waiting a very long time for the dependencies to calculate while the spinner would turn very slowly, if at all. Now, after all this time, I have finally tried Gentoo again, and it is noticeably faster than it was nine months ago. It could be, as Drobbins said, that the devs are no longer doing as complex a dependency calculation. Though, this is both good and bad.

So, is Gentoo dying like people keep claiming? I don't think so. I think it's evolving. It may not be the same Gentoo it was 5 years ago, but it's still Gentoo. Everything evolves and changes. If the evolution is not a good one, then it can evolve again, but one thing is sure: there will always be devs and users for this distro in one form or another. Maybe in the future portage will be replaced by something else, like pkgcore. Changing the package manager does not change the distro. Granted, I do see room for improvement in Gentoo, but I also see things that are rock solid and well done -- the same as with all distributions.

In the Linux world, nothing really dies; even the distros that have long since passed have just evolved into other distros. Of course, progress purely for its own sake should not be pursued, but progress where needed should be.

I feel that in today's world, nothing should default to being compiled for i386. Whoever puts Gentoo on an actual i386 should be tested in the mental health department. But I do have a valid reason: the i386 lacks features that are now standard even on today's fastest processors, e.g. hardware support for copy on write. The i486 has it, along with a load of features still in use today.

This is one problem I see with the whole Linux community: they try to support old crap with each version. This is wrong beyond wrong. Version numbers are there to scope support: version 1 supports i386 to blah, version 2 supports blah to something new. It's fine to maintain two or even three versions at once. You drop support for all the old crap in newer versions, allowing progress and new features that were never dreamed possible because of what the new hardware can do. Hence the folly of e17: it's nice, but they decided to support old crap with eyecandy. The old crap can't handle it anyway, so drop the old crap, go with hardware rendering, and you could do more with fewer CPU cycles.

This is the major problem in Linux: the old crap being kept on life support. Take the kernel: we are up to version 2.6 and it still supports the i386. I say drop everything below i686 from the kernel now, so that the highest profile that can be used is an i686 profile. Do the same in glibc: drop everything below i686 and create new profiles for the newer hardware. This is called evolution, and stopping evolution for the sake of preserving the past is just as bad as progress for the sake of progress. Linux is stuck in the 1980s technologically. Think of what could be done if the old crap were dropped in new versions, and those versions were written to exploit the features of modern-ish hardware. The possibilities are endless.

This is what the article fails to mention. They talk about Gentoo and its shortcomings, but what about the shortcomings of Linux as a whole? The reasons why Linux is sometimes slow, and why the kernel is a freaking huge download: because it still carries support from the 1990s. So it would be okay for things to change. We don't need to keep supporting old stuff in new products. The old stuff can't really run the new stuff well anyway, and the old versions can still be maintained, allowing development of features that the old stuff can run well, and development of new technology that just kicks Microsoft's ass.

EDIT: The main point of this is that one cannot support everything and expect greatness. e17, for example, is going the software-rendering route to support everything from old hardware to machines that can't handle hardware acceleration, and by trying to support everything they limit what they can do and make it really processor-intensive even on newer machines. A WM that uses hardware acceleration and only runs on hardware capable of it would be vastly superior. You can't support everything; drop support and things tend to speed up. ;) It's okay to have multiple versions that vary the support, or even multiple trees, to keep things simple.

Cheers.
Corona688
Veteran
Posts: 1204
Joined: Sat Jan 10, 2004 7:51 pm

Post by Corona688 » Thu Sep 06, 2007 3:47 pm

Genone wrote:I don't think anybody has ever said that portage is faster than paludis, so no clue why you come up with that now. After all paludis uses a super-fast DB backend .. oh wait, it doesn't, so I guess that a DB backend isn't the magic solution to performance problems, just like pkgcore shows that a different implementation language isn't a magic solution either.
Disk is a bottleneck, but certainly not the only one. A very long time is spent on dependency calculation, something that would take less time in a compiled language than a scripted/interpreted one. The difference when CPU-bound is hardly insignificant.

Instead of replacing portage altogether, though, perhaps an external library could be made to speed up dependency calculation. I know Python supports such external libraries; I'll have to look into that...
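The "external library" route is real: CPython lets hot spots be implemented as C extension modules, and the same effect shows up whenever Python code delegates a loop to a C-implemented builtin. A toy sketch of the principle, using hypothetical package atoms rather than portage's actual data structures:

```python
# Toy sketch: which atoms in a dependency list are already installed?
# The pure-Python loop pays interpreter overhead on every iteration;
# delegating the same work to set intersection runs it inside
# CPython's C implementation of set -- the kind of win a real C
# extension would target in portage's dependency calculation.
# (The package atoms below are made-up examples.)

installed = {"sys-libs/glibc", "dev-lang/python", "sys-apps/portage"}
wanted = ["dev-lang/python", "app-misc/foo", "sys-libs/glibc"]

def satisfied_pure(wanted, installed):
    out = []
    for atom in wanted:            # every step interpreted
        if atom in installed:
            out.append(atom)
    return out

def satisfied_builtin(wanted, installed):
    return sorted(set(wanted) & installed)   # the loop runs in C

assert sorted(satisfied_pure(wanted, installed)) == satisfied_builtin(wanted, installed)
assert satisfied_builtin(wanted, installed) == ["dev-lang/python", "sys-libs/glibc"]
```

Both functions compute the same answer; the point is only where the inner loop executes. A real extension would do the same for the CPU-bound parts of the depgraph walk.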
Petition for Better 64-bit ATI Drivers - Sign Here
http://www.petitiononline.com/atipet/petition.html
96140
Retired Dev
Posts: 1324
Joined: Sun Jan 23, 2005 9:18 pm

Post by 96140 » Thu Sep 06, 2007 4:05 pm

--
Last edited by 96140 on Fri Sep 13, 2013 9:05 am, edited 1 time in total.
Corona688
Veteran
Posts: 1204
Joined: Sat Jan 10, 2004 7:51 pm

Post by Corona688 » Thu Sep 06, 2007 5:00 pm

StifflerStealth wrote:Like, I feel that in today's world, nothing should be defaulted to be compiled for i386. Whoever puts Gentoo on an i386 should be tested in the mental health department.
The stage tarballs should not be i386, or at the very least an i686 version should be available; changing CHOST is a pain. But the i386 is not as obsolete as you think.

With respect to desktop computers, yes, the 80386 is dead and buried, but the 80386 core is cheap, small, 32-bit, virtual-memory-capable, and basically public domain. This makes it attractive for many embedded systems. These sorts of things can't even run stock DOS, but Linux is flexible enough; without i386 support, there might be no free or even halfway standard OS for them at all, let alone a POSIX multitasking one.

Remember that Gentoo is more or less a distro-kit. I would not install Gentoo on one of these devices; that would be insane. But I could use a Gentoo computer to build everything I needed for one, right down to the binary packages to be installed. As a programmer and developer, I'd be quite irked if they ripped out this ability based on fundamental misunderstandings about CPU features and kernel support.
But, I do have a valid reason. The i386 lacks features that are now standard even on today's fastest processor, i.e. copy on write.
So don't compile your kernel for i386. These memory-management features aren't particularly relevant to the CHOST of the whole system; the kernel is the only place these instructions are used.
This is one problem I see with all of the Linux community: They try to support old crap with each version. This is wrong beyond wrong. Version numbers are there to show support, like version 1 supports i386 to blah, version 2 supports blah to something new. It's okay to maintain two versions at once, or even three. You drop support for all the old crap in newer versions, thus allowing progress and new features that were never dreamed possible because of what the new hardware can do.
And so it does. The kernel is extremely flexible. It doesn't limit you to the functionality of the i386 unless that is the CPU you actually chose.
I say, drop everything below i686 in the kernel now. The max profile that can be used is an i686 profile. Bump that up in glibc: drop everything below i686 and create new profiles for the newer hardware. This is called evolution, and stopping evolution for the sake of preserving the past is just as bad as progress for the sake of progress.
i386 still exists. Your PC is not the universe. Evolution is not stopped anyway; you're not stuck with i386 features unless that is the CPU you actually choose.
Linux is stuck in the 1980's with technology.
Not so. I notice the kernel has many options, only one of which is i386. The differences between i386, i486, and i686 from an application-software perspective are, frankly, very minor; after all, applications don't manage COW, the kernel does. Things like MMX are tempting to assume, but there are even modern x86 CPUs that don't have them. I've used one: a strange thing by VIA inside a hardware traffic shaper. It runs Linux, of course.
Think of what could be done if the old crap were dropped in new versions and the new versions written to support the features of modern-ish hardware. The possibilities are endless.
Like what?
This is what the article fails to mention. They talk about Gentoo and its shortcomings, but what about the shortcomings of Linux as a whole? The reasons why Linux is sometimes slow, and why the kernel is a freaking huge download: because it still carries support from the 1990s.
Not true. Some old things have been removed, others are in the process of being removed, and overall the old code (and CPU-specific code generally) is very small. The reasons the kernel source is huge are:
  • It supports so very many kinds of devices, many of which you'll never encounter but others find vital.
  • Transition periods between new driver models being introduced and old ones being removed. OSS has a big DEPRECATED stamp on it but isn't quite gone yet; they'd ditch it if they could, but too many damn things have a death grip on it.
  • You're getting complete kernel sources, not merely x86: source for alpha, several kinds of ARM, avr32, h8300, m32r, m68k, mips, parisc, ppc, s390, sh64, sparc, um, v850, xtensa, and x86_64, plus device drivers for things specific to any of these architectures.
The x86-specific code -- all of it, from i386 to P4 -- is quite minuscule. As it should be: the Linux/CPU memory-management interface is by nature minimal, tiny, modular, and only used in a handful of places. It costs shockingly little to keep the i386 option there if needed, and it is needed -- just not by you.

I do think that the alpha and m68k architectures should be dropped. Those machines haven't been made in years, the code's probably broken from lack of maintenance, and compatibility with these architectures was never great in the first place. That would save you six megs or so.
The old stuff really can't run the new stuff well anyways
Probably true application-wise, not so much kernel-wise.
and allowing development of new technology that just kicks Microsoft's ass.
Like what?
Drop support for everything and things tend to speed up. ;)
Linux does not support everything unless you build everything in. In fact, the Linux kernel tends to be highly tuned to the particular system you build it for, to the point of reduced performance on others; they recently introduced a "generic" CPU option to avoid this, generating general-purpose code that works well on almost any CPU instead of blazing fast on your particular one.
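The tuning described above is a build-time choice among the kernel's processor-family options. Roughly, in a 2.6-era x86 .config (symbol names quoted from memory; treat them as approximate and check your own kernel's menuconfig):

```
# Processor family selection, 2.6-era x86 .config (illustrative):
# CONFIG_M386 is not set
# CONFIG_M486 is not set
CONFIG_M686=y          # tune code generation for one CPU family
CONFIG_X86_GENERIC=y   # additionally keep the generated code
                       # reasonable on other x86 CPUs, trading a
                       # little peak speed for portability
```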
Corona688
Veteran
Posts: 1204
Joined: Sat Jan 10, 2004 7:51 pm

Post by Corona688 » Thu Sep 06, 2007 5:28 pm

nightmorph wrote:You shoulda checked your facts s'more before making that blanket statement that interpreted languages are slower
My facts are fine, thanks. You can write fast or slow code in either, of course. I'm also perfectly aware of why people use interpreted languages; they're handy, more than fast enough by most standards, and interpreting speed usually isn't the bottleneck anyway. I like interpreted languages, I use them all the time. But all else being equal, code that doesn't pass through an interpreting layer runs faster than code that does.

I also think you've got me pegged wrong. I understand how years of playing whack-a-mole against "OMG gentoo is written in scripting language" posts can frustrate someone, but I'm not saying portage is slow because it's written in Python, alone or in part. I'm not even calling it slow, not since 2.x came out anyhow. I don't even want to replace it, just augment it in spots, adding Python "builtins" for a few crucial CPU-bound things. I'm not even asking the Gentoo devs to do this. I'm curious; I want to see what I can do myself.

If I end up writing C equivalents for all the Python functionality used in dependency calculation, I could hardly expect it to be faster. Yet if there are any unneeded steps happening, C can skip them if anything can.
PCalitrack
n00b
Posts: 20
Joined: Sun Aug 05, 2007 7:52 pm
Location: Berkeley, CA

Post by PCalitrack » Thu Sep 06, 2007 5:44 pm

I don't think Gentoo is dying... I just think it will never attract those first-time Linux users. Half the reason people try Linux in the first place is for the challenge. I started with Mandrake 10, and at the time that was a huge challenge. Then Ubuntu came along, and I was able to really jump into Linux with its ease of setup. With knowledge from about 1-2 years of Ubuntu, I moved on to Arch Linux, seeking a new challenge. Arch was a little more difficult, but in the end I was always looking toward using Gentoo (I just didn't have the ability). After a few months of Arch, I decided to make the leap to Gentoo, and since then I have never looked back.

Gentoo is stable, allows endless customization, and IMO has more documentation than just about any distribution out there. I think people are moving to Gentoo all the time, but they have to work through many other distributions before they reach this point.

The only negative I experienced with Gentoo on first use was that learning Gentoo felt like learning Gentoo rather than Linux/Unix; there are too many one-off tools that could be done away with. However, once you learn these, you are off to the races.

I don't think Gentoo will die as long as there continue to be people out there who want the challenge and control of a fully-customizable system.
jonnevers
Veteran
Posts: 1594
Joined: Thu Jan 02, 2003 6:59 pm
Location: Gentoo64 land

Post by jonnevers » Thu Sep 06, 2007 5:55 pm

Genone wrote:
omp wrote:Case A: Not-masked foo depends on masked bar depends on masked baz.
Case B: Not-masked foo depends on masked bar and masked baz.

While not showing baz in case A would make sense, for the "assuming the user will unmask" reason which you mentioned, I don't see why both foo and baz can't be shown in case B. By showing both in case B, you would not be anticipating the user unmasking bar.

I hope the above makes sense; I might not have clearly explained what I meant. :)
Yeah, in such limited scenarios it might be safe to display both, as the possible solutions for bar and baz don't affect each other. But in case A the solutions for baz depend on which solution the user takes for bar (bar-1 depends on baz-1 while bar-2 depends on baz-2, and so on).
Well, I think this is being overcomplicated, but I certainly appreciate the interest. If portage can now show how packages are moving between overlays during updates (the [0=>1] stuff), I don't see why a form of dependency-level display couldn't also be devised.

If it's in the depgraph, it should all come out in --pretend. Perhaps it's my perspective that masked packages shouldn't be fatal. Obviously portage can't continue with the merge at that point, but it's just so frustrating to unmask a package only to hit another fatal mask on the next run of portage, and on and on.

Maybe I use too much experimental software, but that's not going to change anytime soon.

It'd be awesome to be able to pull an experimental GNOME meta-ebuild and have a simple emerge -pv gnome show me *all* of the ~ARCH packages it would need to satisfy the request.

Or even a hook into portage, where I can define something in make.conf that gets triggered when a masked package is hit; make it optionally fatal or optionally handled according to the user's wishes. I've coded a Java utility to manage my package.* files, so a hook could just call this utility to keyword/unmask/etc. the package in question. I understand there are a ton of peripheral issues around this particular approach.

Unfortunately, Python isn't my strongest language, and I don't think there was ever a bgo report for this (at least not by me).

Here are the quick-and-dirty patches I used to apply to portage (sorry, I even forget which portage version they would apply to):
http://files.monkeydisaster.net/files/p ... atal.patch

The code around this is so different now that I couldn't figure out how to make it work :cry:
Corona688
Veteran
Posts: 1204
Joined: Sat Jan 10, 2004 7:51 pm

Post by Corona688 » Thu Sep 06, 2007 6:24 pm

PCalitrack wrote:The only negative that I experienced with Gentoo on first use was that learning Gentoo felt like you are learning Gentoo and not Linux/Unix. There are too many random tools that could be done away with.
It might look that way, but I'd argue the opposite. Mandrake was livid when I compiled and installed my own kernel; it threw a snit and never installed anything else ever again. Mandrake makes you do things the Mandrake way, or else. Gentoo deals with custom kernels gracefully, checking what you're actually running instead of blindly following its dependency tree. Gentoo gives you choices.
steveL
Watchman
Posts: 5153
Joined: Wed Sep 13, 2006 1:18 pm
Location: The Peanut Gallery

Post by steveL » Fri Sep 07, 2007 5:44 am

Corona688 wrote:But all else being equal, code that doesn't pass through an interpreting layer runs faster than code that does.
That's true.
If I end up writing C equivalents for all the Python functionality used in dependency calculation, I could hardly expect it to be faster. Yet if there are any unneeded steps happening, C can skip them if anything can.
Eh? It would indeed be faster (as above). And C can't skip steps at all; that's down to the algorithm(s) used. The point is that it would be really hard to maintain in C; pkgcore simply uses C to speed up the bottlenecks. And its dep resolution is blindingly fast.
Corona688
Veteran
Posts: 1204
Joined: Sat Jan 10, 2004 7:51 pm

Post by Corona688 » Fri Sep 07, 2007 4:25 pm

steveL wrote:Eh? It would indeed be faster (as above.)
Not much, if I end up doing everything Python was doing, the exact same way. If I have to roll my own dynamic type-checking code, for example, I can't expect it to run faster than Python's, which is also written in C.
And C can't skip steps at all; that's down to the algorithm(s) used.
...and the interpreter involved. C doesn't spend time on dynamic type checking at run time; it checks types once, at compile time. And so forth. A compiler decides an awful lot of things in advance that an interpreter would spend time checking during execution.
The point is it would be really hard to maintain in C
Exactly, which is why I propose not a rewrite in C, but extensions in C to speed up bottlenecks.
pkgcore simply uses C to speed up the bottlenecks.
...see?

Except as far as I can tell pkgcore's not actually a portage enhancement, is it? It's a portage alternative. And I like portage.
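The compile-time vs. run-time distinction argued above can be made concrete: a Python function compiles to one generic bytecode sequence whose operand types are resolved on every call, which is exactly the per-instruction work a C compiler discharges once at build time. A tiny illustration (the function and names are mine, not portage's):

```python
# One definition, one bytecode sequence -- but '+' means integer
# addition on one call and string concatenation on the next.  The
# interpreter inspects the operand types at run time, every time;
# a C compiler would have fixed the operation at compile time.
def add(a, b):
    return a + b

assert add(2, 3) == 5                  # int addition
assert add("foo", "bar") == "foobar"   # str concatenation, same code
```

This flexibility is precisely what makes Python handy, and precisely the overhead a C "builtin" avoids in a hot loop.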
enderandrew
l33t
Posts: 731
Joined: Tue Oct 25, 2005 8:37 am

Post by enderandrew » Thu Sep 13, 2007 3:56 am

Genone wrote:Maybe because the existing module didn't work with the to-be-released portage versions at that time? (which the author was well aware of, see [bug]83371[/bug]). Also I was more interested in references about the claims you made regarding the behavior of portage devs. Who said it couldn't be done? (whatever "it" is; the main problem with all those "db backend" discussions is that people never properly define what they consider to be a "db backend", also see http://dev.gentoo.org/~genone/docs/fosdem-2007-talk.pdf )
The author wrote his module against the existing portage, and a very simple fix got it working with the next version. Regardless, portage devs said over and over again that a db model would offer no increase in performance -- except it did. And when they decided to implement a system of their own and redesign the cache, they didn't look at any existing user implementations, but wrote a new one from scratch. If their new system proved superior to every existing one, that would be okay; except that the cdb method still offers a performance increase over the in-portage system. Way to discourage input from your community.
About cdb cache plugin:
a) the cdb backend of that module was only available on some archs, which alone is enough reason to not make it a default
b) Nick (carpaski) explicitly said that it might be included at some point
There are enough devs that it is hard to make blanket statements for all of them, but the majority seemed against any db method, period. carpaski did support a db method, and suggested a more generic any-db method that would support all arches, which solves problem a).
c) Just look at the thread you linked first, you'll see a significant number of people reporting problems caused by the additional dependency (e.g. "cdb module not found")
And each of those problems was chalked up to people not following directions; when they installed it correctly, it worked like a charm. If this were the default, people wouldn't have to configure it manually and run into these problems.
d) the point of dependencies is to evaluate the cost vs. the benefits we get from them. Python adds one (heavy) dep for the benefit of a nice RAD programming language with a huge standard library (paludis trades it for a direct dependency on gcc and libstdc++ as well as some other support libraries). Cdb would have been a minor, but possibly critical, dependency, just for somewhat better performance of some operations.
How many problems have there been with Python upgrades and changes over the years? It has been said before, and I'll say it again: the exact same features can be replicated, with improved performance, in C or C++ and Bash. You immediately remove dependencies, improve performance, and lose nothing. I have never once seen a good argument for keeping portage on Python. Rather, the argument has always been that the guys who initially designed it knew Python, and there aren't enough benefits to warrant a change/rewrite. Last time I checked, fewer dependencies, smaller stages, fewer problems with Python upgrades, smaller embedded systems, improved performance, and one less thing to troubleshoot are all serious, tangible benefits of a rewrite.
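The "db backend" being argued over amounts to replacing per-ebuild flat files with an on-disk hash offering constant-time record lookup. A sketch of that idea using the Python standard library's dbm module standing in for cdb (the keys and values below are made-up illustrations, not portage's real cache format):

```python
# Sketch of the "db backend" idea for a package-metadata cache:
# instead of re-parsing one flat file per ebuild on every run, keep
# key/value records in an on-disk hash with constant-time lookup.
# Stdlib dbm stands in for cdb here; the record contents are
# hypothetical.
import dbm
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "metadata_cache")

# Build the cache once (a package manager would do this at sync time).
with dbm.open(path, "c") as cache:
    cache[b"dev-lang/python-2.4"] = b"DEPEND=sys-libs/glibc SLOT=2.4"

# Later runs look records up directly instead of re-parsing files.
with dbm.open(path, "r") as cache:
    record = cache[b"dev-lang/python-2.4"]

assert b"SLOT=2.4" in record
```

Whether such a backend actually wins in portage depends on where the time really goes (parsing and I/O vs. the dependency calculation itself), which is the crux of the disagreement in this thread.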
About rewrites:
No one ever said that a rewrite wouldn't offer any benefits (at least not in the threads I've been involved in). What has been said is that
Let me quote you directly from such a thread. You dismissed a C/C++ rewrite with the following post:
(dis)advantages of C/C++ over python:
The speed improvements would not be that much.
Again, this flies in the face of real, quantifiable benchmarks of rewrites that do perform MUCH better in the real world. But why should facts and logic factor in here? I'd rather have one-sentence subjective dismissals.
a) a rewrite won't be faster just because of a different language
Again, a one-sentence dismissal that completely ignores the fact that a rewrite has happened, and it is faster.
b) the cost of a rewrite would be too high for the gain ("cost" doesn't just mean the initial development time)
Two rewrites are already largely done. You say there is little to no gain, which is false, and that the development cost would be too high, despite the fact that the development is largely done.
I don't think anybody has ever said that portage is faster than paludis, so no clue why you come up with that now. After all paludis uses a super-fast DB backend .. oh wait, it doesn't, so I guess that a DB backend isn't the magic solution to performance problems, just like pkgcore shows that a different implementation language isn't a magic solution either.
Can you contradict yourself more in two sentences? You say that nobody claims portage is faster than the alternatives, and then turn right around and suggest the alternatives aren't any better.

Fact: You've said repeatedly (as most devs have) that a new language and a rewrite won't offer improvements.

Fact: Rewrites are blowing away portage with benchmarks.

Please attempt to reconcile those two statements.
Nihilism makes me smile.
Genone
Retired Dev
Posts: 9656
Joined: Fri Mar 14, 2003 6:02 pm
Location: beyond the rim

Post by Genone » Thu Sep 13, 2007 9:30 am

enderandrew wrote:
About rewrites:
No one ever said that a rewrite wouldn't offer any benefits (at least not in the threads I've been involved in). What has been said is that
Let me quote you directly from such a thread. You dismissed a C/C++ rewrite with the following post:
(dis)advantages of C/C++ over python:
The speed improvements would not be that much.
Again, this flies in the face of real, quantifiable benchmarks of rewrites that do perform MUCH better in the real world. But why should facts and logic factor in here? I'd rather have one-sentence subjective dismissals.
a) a rewrite won't be faster just because of a different language
Again, a one-sentence dismissal that completely ignores the fact that a rewrite has happened, and it is faster.
Maybe you should actually read what I said:
- "that much faster" - maybe I was wrong there, but that was a subjective statement in the first place
- "just because of a different language" - that's the key point, portage isn't slow due to the language, but due to some aspects of the implementation (dbapi access model being one of the main issues)
b) the cost of a rewrite would be too high for the gain ("cost" doesn't just mean the initial development time)
Two rewrites are already largely done. You say there is little to no gain, which is false, and that the development cost would be too high, despite the fact that the development is largely done.
Thanks for putting words in my mouth: x < y does not imply that x is close to zero. And you obviously ignored the part where I said that "cost" isn't just development time.
I don't think anybody has ever said that portage is faster than paludis, so no clue why you come up with that now. After all paludis uses a super-fast DB backend .. oh wait, it doesn't, so I guess that a DB backend isn't the magic solution to performance problems, just like pkgcore shows that a different implementation language isn't a magic solution either.
Can you contradict yourself more in two sentences? You say that nobody claims portage is faster than the alternatives, and then turn right around and suggest the alternatives aren't any better.
You like turning statements into their complete opposites, don't you? There's no point in continuing this "debate" if you can't read other people's arguments properly (EOD for me).
steveL
Watchman
Posts: 5153
Joined: Wed Sep 13, 2006 1:18 pm
Location: The Peanut Gallery

Post by steveL » Thu Sep 13, 2007 1:50 pm

Corona688 wrote:If I have to roll my own dynamic variable type checking code, for example, I can't expect that to run faster than python's, which is also written in C.
Er, that's kinda beside the point: for a start, it'll run slower in Python, because every instruction is interpreted. Secondly, if you really need that kind of thing, you don't use C; that's what C++ or Python or [insertName] is for.
And C can't skip steps at all; that's down to the algorithm(s) used.
...and the interpreter involved. C doesn't spend time on dynamic type checking at run time; it checks types once, at compile time. And so forth. A compiler decides an awful lot of things in advance that an interpreter would spend time checking during execution.
Yes, I think we both know enough about languages.
The point is it would be really hard to maintain in C
Exactly, which is why I propose not a rewrite in C, but extensions in C to speed up bottlenecks.
pkgcore simply uses C to speed up the bottlenecks.
...see?
No, not really. I didn't see a proposal anywhere, just some statements about changes you are making to portage. Quoting what I said as if it were your proposal is a bit strange. The real point you're missing is that we're actually trying to help you avoid wasting a load of time.
Except as far as I can tell pkgcore's not actually a portage enhancement, is it? It's a portage alternative. And I like portage.
Yeah, [topic=546828]so do I[/topic]. The point is that it's also a spaghetti-mountain of code that has accreted over the years, which is why the main developer did a rewrite: he could not do that while maintaining the code, so he passed maintenance on to other devs on the team and wrote pkgcore. In addition, the portage team will soon release a new version (2.2), as they have mentioned. The two teams collaborate, afaict, and do so quite effectively. If you want to help out, please do: just be aware of what work is already being done, and don't reinvent stuff that other people have already implemented, in my opinion.
renrutal
Tux's lil' helper
Posts: 135
Joined: Sat Mar 26, 2005 3:28 am
Location: Brazil

Post by renrutal » Fri Sep 14, 2007 1:39 pm

May we split the topic? I'd like to reply to the original post, but people have hijacked the thread; I don't even know where to begin.