Gentoo Forums

systemd / udev Slugfest Number Elevendy-Eleven

golding
Apprentice


Joined: 07 Jun 2005
Posts: 232
Location: Adelaide / South Australia

PostPosted: Fri Feb 22, 2013 5:02 am    Post subject: systemd / udev Slugfest Number Elevendy-Eleven Reply with quote

Having been away from the system for at least the last two years, I was somewhat surprised to see this /run/ in my / dir. Add to that being unable to search for 'run' in these forums because of the 'stop list', and I had real trouble finding out what the hell is going on.

Poettering is to "blame", from what I have been able to find out - maybe not as the inventor, but certainly as the lead "pusher" for inclusion.

I wonder why /var could not be adjusted to include 'run' as a sub-dir? If the system can be arranged so that /run/ is mounted early enough, then the same could have been done for /var. Personally, I think run belongs in /var.

Split this off from /run as it was getting off topic. -- JRG
_________________
Regards, Robert

..... Some people can tell what time it is by looking at the sun, but I have never been able to make out the numbers.
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Fri Feb 22, 2013 9:23 am    Post subject: Reply with quote

Yes, /run would belong under /var; that is where it was under the original openrc and where it should be on any sane system, i.e. one with a thought-through order of starting processes. Of course, this contradicts the systemd insanity, which declares any fixed order as broken and thus needs a starting system (initramfs and /run hacks) just to get running. This means that systemd has already failed spectacularly at its main task (in fact, it can only start the system if it is itself run from a starting system which does not repeat the systemd mistakes), but unsurprisingly the inventors/supporters refuse to admit it and instead declare everything broken which does not match their concept. Of course, everybody with a clear mind could see from the very beginning which concept was really broken. Unfortunately, as so often in computer science, it is neither the concept nor the implementation which counts.
depontius
Advocate


Joined: 05 May 2004
Posts: 3505

PostPosted: Fri Feb 22, 2013 2:34 pm    Post subject: Reply with quote

Please be careful what needs to be blamed on Poettering and what doesn't. A little history is in order here...

In the "old days" you could always count on the kernel being able to start the system and mount root, all by itself. At that point we had /, /sbin, /lib, and a few other places, that that content sufficed for getting the rest of the system up and running. That includes /var, /usr, /home, etc, etc, etc, which were often placed on different filesystems, perhaps over the network, very likely using extra modules, etc. In the "old days" this very basic root filesystem was the "early userspace".

Enter the likes of RAID, full-disk LUKS, etc... Now the kernel by itself is no longer able to even mount root. Extra tools are needed to get that far, hence the initrd or initramfs. So "early userspace" has moved into the initrd/initramfs, which now contains the tools necessary to mount root itself. This compounds some problems, because for instance /dev and /var/run are both needed before root even exists. The simplest fix was to move them onto tmpfs, hence devtmpfs and using a tmpfs for /run. Now they can be built in the early-userspace root, and then moved over to the real root once it exists. Note that once the real root exists, it is not yet clear that /var has been properly mounted, hence /run instead of /var/run. All of this tomfoolery is supposed to be over once control has been turned over to the real root, rather than postponing the care of /var/run until some time after /var has been mounted.
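
Roughly, that early-userspace dance looks like this (a sketch only, not any particular distro's actual initramfs - the root device name is made up):

Code:
#!/bin/sh
# minimal initramfs-style init: set up /dev and /run before root exists
mount -t devtmpfs devtmpfs /dev
mount -t tmpfs -o mode=0755 tmpfs /run
# ... assemble RAID / open LUKS here, then mount the real root ...
mount /dev/mapper/root /newroot
# carry the early mounts over so nothing has to be redone after the switch
mount --move /dev /newroot/dev
mount --move /run /newroot/run
exec switch_root /newroot /sbin/init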

Having to use tools to get root mounted is the real cause, here. (or at least having to be able to use tools to get root mounted - we don't all have RAID or LUKS roots.)

From what I can tell, Poettering is trying to eventually completely wrap the kernel in his own "Just Works (TM)" layer. I don't necessarily argue with what he is doing, just how he is doing it. In my opinion the Poettering Layer is horribly opaque to the casual hacker, and accessible only to those who want to become near-developers of that layer. IMHO, THAT is the problem, not the Poettering Layer itself. By the way, think for a minute... First Lennart wrapped sound with pulseaudio, next he wrapped init and miscellaneous services with systemd. He hasn't touched networking or display/input - yet. (Or is he somehow involved in iBus?)
_________________
.sigs waste space and bandwidth
ulenrich
Veteran


Joined: 10 Oct 2010
Posts: 1480

PostPosted: Fri Feb 22, 2013 3:31 pm    Post subject: Reply with quote

It has nothing to do with systemd - I wonder whether golding even had systemd installed two years ago?

And surely /var/run is a broken concept (take a look at Linux history):
At the beginning there was no full-featured tmpfs available. The signaling directories "run" and "lock" had to be on a writable disk, so /var/run and /var/lock were the places to put them.

Brokenness:
What if /var could not be mounted? That was why Debian didn't recommend a separate partition for /var. Now this broken concept is solved by putting /run directly on the / root as a tmpfs.

But:
The reason to remove symbolic links
/var/run -> /run
/var/lock -> /run/lock
is related to portage performance issues: https://bugs.gentoo.org/show_bug.cgi?id=453834#c1
Nothing more.
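
For reference, on a migrated system the end result looks roughly like this (a sketch; the exact mount options are up to the distribution):

Code:
# /run is a tmpfs mounted very early by the init system
$ mount | grep ' /run '
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
# the old locations are kept only as compatibility symlinks
$ ls -ld /var/run /var/lock
lrwxrwxrwx 1 root root 4 ... /var/run -> /run
lrwxrwxrwx 1 root root 9 ... /var/lock -> /run/lock
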
ulenrich
Veteran


Joined: 10 Oct 2010
Posts: 1480

PostPosted: Fri Feb 22, 2013 4:19 pm    Post subject: Reply with quote

depontius wrote:
In my opinion the Poettering Layer is horribly opaque to the casual hacker, and accessible only to those who want to become near-developers of that layer. IMHO, THAT is the problem, not the Poettering Layer itself.
For sure you can implement a simple Linux system just using the kernel and busybox. But a "general" distribution wants to consider far more possibilities. Yet there is no alternative to the "horribly opaque" Poettering Layer which makes so many things accessible to the (expert) user.

It comes to mind: "Make your system simple but not simpler"
We usually don't realise by what factor things get more complicated when a few simple things are added to the space of possibilities.

If I read a forum thread such as
https://forums.gentoo.org/viewtopic-p-7251914.html#7251914
I would guess an automated Poettering approach could solve this problem easily. But I am not that into it yet for sure ...
depontius
Advocate


Joined: 05 May 2004
Posts: 3505

PostPosted: Fri Feb 22, 2013 5:43 pm    Post subject: Reply with quote

ulenrich wrote:
depontius wrote:
In my opinion the Poettering Layer is horribly opaque to the casual hacker, and accessible only to those who want to become near-developers of that layer. IMHO, THAT is the problem, not the Poettering Layer itself.
For sure you can implement a simple Linux system just using the kernel and busybox. But a "general" distribution wants to consider far more possibilities. Yet there is no alternative to the "horribly opaque" Poettering Layer which makes so many things accessible to the (expert) user.

Decent partitioning and documentation. I'm not sure how well Poettering's stuff is partitioned - others have said that it is likely a "merging of layers" that has historically proven hazardous with time. I do know that the documentation is rather inaccessible. It either tells you how to use it, or starts you on the path to being a developer. It should be possible to make in-between documentation - it was there for Xorg in the old days.
ulenrich wrote:

It comes to mind: "Make your system simple but not simpler"
We usually don't realise by what factor things get more complicated when a few simple things are added to the space of possibilities.

If I read a forum thread such as
https://forums.gentoo.org/viewtopic-p-7251914.html#7251914
I would guess an automated Poettering approach could solve this problem easily. But I am not that into it yet for sure ...

If he's got 5 NICs he's beyond the "just works (TM)" point anyway, and needs all the access he can get.
_________________
.sigs waste space and bandwidth
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Fri Feb 22, 2013 8:21 pm    Post subject: Reply with quote

depontius wrote:
Enter the likes of RAID, full-disk LUKS, etc... Now the kernel itself is no longer able to even mount root.

Yes, if you want to encrypt even / then that is a valid use of an initramfs. However, there is hardly a need to encrypt / if sanity had been kept and most things (including /usr) were allowed to live on a separate partition.
Quote:
This compounds some problems, because for instance /dev and /var/run are both needed before root even exists.

This is true for /dev (obviously) but not true for /var/run if the init system simply mounts root first. All sane init systems (including Gentoo's baselayout1 and openrc) had no problem doing this. The problem started with the udev developers deciding that signals should no longer be postponed until an appropriate state of the init (or even of the kernel) was reached. This has already driven Linus mad (and led to the firmware-loading debacle). And it is of course Poettering who is to blame for that, whether his name appears explicitly next to it or not.
Quote:
From what I can tell, Poettering is trying to eventually completely wrap the kernel in his own "Just Works (TM)" layer.

As with every interface, it covers some situations, including some which were not easily coverable before, and excludes others which were trivially coverable before (like a separate /usr without an initramfs). So the effect is to buy some setups at the cost of other setups - and at the cost of an additional layer of immense complexity, which common sense should tell everybody to reach for only as the very last resort.
Quote:
He hasn't touched networking or display/input - yet. (Or is he somehow involved in iBus?)

He has committed the avahi crime and is certainly not innocent of the wild abuse of dbus.
ulenrich
Veteran


Joined: 10 Oct 2010
Posts: 1480

PostPosted: Fri Feb 22, 2013 8:36 pm    Post subject: Reply with quote

depontius wrote:
ulenrich wrote:
https://forums.gentoo.org/viewtopic-p-7251914.html#7251914
I would guess an automated Poettering approach could solve this problem easily. But I am not that into it yet for sure ...

If he's got 5 NICs he's beyond the "just works (TM)" point anyway, and needs all the access he can get.
systemd is all about exactly these kinds of complicated dependencies between hardware and services. If you know how to write a systemd unit, this kind of problem should be easy to solve.
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Fri Feb 22, 2013 8:37 pm    Post subject: Reply with quote

ulenrich wrote:
For sure you can implement a simple Linux system just using the kernel and busybox. But a "general" distribution wants to consider far more possibilities.

That's why distributions should write appropriate init scripts and configurations - and they have succeeded with this very well.
Quote:
Yet there is no alternative to ...
It comes to mind: "Make your system simple but not simpler"

And again you are citing platitudes, perhaps believing that even Einstein was really praising systemd. I will not comment further on that, but once again I understand very well why you avoid technical facts:
Quote:
I would guess an automated Poettering approach could solve this problem easily. But I am not that into it yet for sure ...
No, certainly you are not into systemd yet, for sure - otherwise you would see that the problem discussed there really has nothing to do with the init system. In the worst case the Poettering approach, due to its lack of dependencies, even makes it impossible to find out the dependent services automatically.
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Fri Feb 22, 2013 8:47 pm    Post subject: Reply with quote

ulenrich wrote:
systemd is all about exactly these kinds of complicated dependencies between hardware and services. If you know how to write a systemd unit, this kind of problem should be easy to solve.
No, systemd is all about avoiding dependencies: it formulates conditions for a service to start, but not dependencies. This is exactly why systemd in the worst case can even make the task impossible; in the best case perhaps some new hacks have already been introduced so that it is possible anyway. Certainly it is not helpful to use systemd in that situation - at best it is not too severe a disadvantage for the task.
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Fri Feb 22, 2013 9:02 pm    Post subject: Reply with quote

mv wrote:
depontius wrote:
He hasn't touched networking or display/input - yet. (Or is he somehow involved in iBus?)

He has commited the avahi crime and is certainly not innocent for the wild abuse of dbus.

I forgot: of course, he has already touched display/input. He thought it was a good idea if everybody, instead of using good old xorg.conf, was forced to do the same thing by learning XML and writing configs for his badly documented and complex hal wrapper, which essentially did nothing but introduce a superfluous layer. Sound familiar?
ulenrich
Veteran


Joined: 10 Oct 2010
Posts: 1480

PostPosted: Fri Feb 22, 2013 9:09 pm    Post subject: Reply with quote

Ok, another Poettering discussion:
mv wrote:
depontius wrote:
... because for instance /dev and /var/run are both needed before root even exists.
This is true for /dev (obviously) but not true for /var/run if the init-system just first mounts root. All sane init-systems ... had no problems to do this. The problem started with the udev developers deciding that signals should no longer be postponed until an appropriate state of the init (or even of the kernel) was reached.
You really, really assume and think:
a developer's wild guess about the state of the boot process will be the sane and proper decision for all systems out there in the wild? "All sane init-systems" in this regard are broken from the start!
And systemd is all about automated dependencies. That is not possible with such artificial decisions taken into consideration.
mv wrote:
As every interface it covers some situations, including some which were not easily coverable before, excluding others which were trivially coverable before (like separate /usr without initramfs). So the effect is to buy some settings at the cost of other settings. And at the cost of an additional layer of an immense complexity
The simple case just works, but all the special cases mean additional complexity! Yes, that is reality. This is why systemd has to provide such an awful lot of man pages!

Complexity will grow after today's announcement from Poettering:
Quote:
we actually make more and more optional of systemd. For example, in systemd git PolicyKit is not only runtime-optional (which it has been for a long time) but now also compile-time optional. And there's more...

And Kay Sievers:
Quote:
Systemd is supposed to work just fine without any audit stuff. The
issue is a more a bug in systemd that should be fixed, not audit be
required.

Poettering:
Quote:
So, yeah, let's just fix the audit issue and that's it.


It seems as if the systemd developers do closely follow the demands of certain Gentoo forum users :/
But this will naturally increase complexity ...
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Fri Feb 22, 2013 9:48 pm    Post subject: Reply with quote

ulenrich wrote:
You really, really assume and think:
A developers wild guess on the state in the boot process will be the sane and proper decission for all systems out there in the wild?

In a deterministic booting order there is no wild guess about the state - it is systemd which wants nondeterministic order and thus makes wild guessing a principle, making it a mandatory requirement that wild guessing will succeed in all states on all systems, thus excluding all easy solutions.
Quote:
And Systemd is all about automated dependencies.

Exactly, "automated" (in contrast to well-defined and deterministic). This might be reasonable after some well-defined state, but systemd's mistake was to assume that this is reasonable from the very beginning. And this is why systemd falied to be able to start a system and had to rely on more and more "fixes" - which are actual complete rebuilds of fundmantal infrastructure. It can start a system only if that system transforms before into something which systemd is able to start.
Quote:
The simple case just works, but for all specials this means additional complexity! Yes it is, this is reality

The simple case just worked before, and for more complicated/special cases some adjustment was necessary. This is what we had.
Now, in the simple case, we have a superfluous layer that is not needed for that case but eats permanent resources and needs heavy additional care for security considerations. And in the complicated case there is of course also the requirement to deal with the details of the new complex layer, often without actually gaining any advantage from it (as in the example just mentioned: in the best case it does not hurt).
ulenrich
Veteran


Joined: 10 Oct 2010
Posts: 1480

PostPosted: Fri Feb 22, 2013 10:28 pm    Post subject: Reply with quote

@MV, first of all, excuse me for not answering all of your points; you respond far too quickly for me. I am German and not fluent in English.

I don't understand your points above: systemd not being about dependencies?
As far as I know, systemd began by exploiting the dependencies in LSB headers. insserv was a project largely maintained by Werner Fink of SUSE; it was adopted by Debian, which is how I came into contact with it.

mv wrote:
ulenrich wrote:
A developers wild guess on the state in the boot process will be the sane and proper decission for all systems out there in the wild?
In a deterministic booting order there is no wild guess about the state - it is systemd which wants nondeterministic order and thus makes wild guessing a principle

All init systems, unless virtual, require hardware, which can fail or be nondeterministic in some way. A different placement of an executable on the hard disk may result in later activation. I wonder how anybody in the real world can talk about deterministic systems. Even if everything were perfect there would still be "Heisenberg Unschärfe" (uncertainty), which will come into play once photonic computation arrives.
systemd, on the contrary, will consider all events and conditions and will "deterministically" react to them. I might be missing your point entirely, but I think you are talking about "perfect world" ideas.
Quote:
Exactly, "automated" (in contrast to well-defined and deterministic). This might be reasonable after some well-defined state, but systemd's mistake was to assume that this is reasonable from the very beginning. And this is why systemd falied to be able to start a system and had to rely on more and more "fixes" - which are actual complete rebuilds of fundmantal infrastructure. It can start a system only if that system transforms before into something which systemd is able to start.

"It [systemd] can start a system only which systemd is able to start."
This is the kind of sentence which is always true. Needs no discussion :)

"why systemd failed to be able to start a system"
I have never experienced systemd failing ...

Quote:
Now we have in the simple case a superfluous layer not needed for that case but eating permanent resources
Systemd eats far fewer resources than all of
syslog+cron+xinet+consolekit+all-shells-running-scripts

Quote:
and needing heavy additional care for security considerations.

Security - would you mind giving an example?
The lightweight virtualisation abilities are just emerging. It will be a huge security gain to easily start your browser in a cave. systemd is famous for its ability to keep track of all threads.

Quote:
And in the complicated case of course also the requirement to deal with details of the new complex layer

- Is there really an additional layer, or is there much less diverse infrastructure?
- Systemd unit creation is much simpler than bash scripting. I would need three days to learn to unit-script the network problem above - and by then I would have learned 60 percent of systemd.

To estimate the cost of learning all of systemd, you would on the other side have to sum up the cost of learning
syslog+cron+xinet+consolekit+shell
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Sat Feb 23, 2013 11:08 am    Post subject: Reply with quote

ulenrich wrote:
I am german, not fluently speaking english language.

Me, too.
Quote:
I don't understand your points above: Systemd not beeing about dependencies?
For what I know Systemd began with exploiting dependencies of LSB-headers.

systemd is about checking conditions (is port X available?) instead of dependencies (Service Y depends on Service X).
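
To make the distinction concrete (a simplified sketch, not taken from any real service; the daemon name /usr/sbin/foo is made up):

Code:
#!/sbin/runscript
# openrc style: an explicit dependency - foo will not start before net is up
depend() {
        need net
}
start() {
        ebegin "Starting foo"
        start-stop-daemon --start --exec /usr/sbin/foo
        eend $?
}
# systemd style, for comparison (unit files are not shell, so shown as comments):
#   [Unit]
#   Requires=network.target            # ordering/dependency directives do exist too
#   After=network.target
#   ConditionPathExists=/run/foo.conf  # a condition: checked once at start, orders nothing
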
Quote:
All init systems if not virtual require hardware, which can fail or be somehow undeterministic.

Yes, any starting system can fail if a bad hardware exception occurs. The difference is that, ignoring hardware defects, a deterministic system works reliably once it has been tested. This is very important for a booting system, because in practice you can only make a few tests, and then the thing has to work reliably even if you are hundreds of kilometers away. With systemd you have perhaps overlooked some conditional which happened to work by accident the 30 times you tried, and then, when you must rely on it booting and you are 500 km away, it will fail.
Quote:
Another placement of an executable on the harddisk may result in a later activation.

That's why a proper init system should not depend on such random factors but should start services in a deterministic order.
Quote:
I wonder how anybody in the real world can talk about deterministic systems.

"Deterministic" is a reasonably well-defined term in computer science. It essentially comes down to sequential vs. parallel execution. The latter has so many disadvantages that it is only justified in very exceptional cases where the time saved is worth all the problems it causes.
Quote:
On the contrary systemd will consider all events and conditions. It will "deterministicly" react to these. I might miss your point in total, but I think you are talking about "perfect world" ideas.

Quite the opposite is true. I hope that this is clearer by the above.
Quote:
"It [systemd] can start a system only which systemd is able to start."
This is a kind of ever true sentence. Needs no discussion :)

You completely miss the point. A good init system should be able to start almost any system given appropriate configuration. systemd, in contrast, is not able to do so: it needs the whole system adapted to it (with hacks like an initramfs, /run, /usr already mounted, etc.) instead of the other way round.
Quote:
"why systemd falied to be able to start a system"
Never experienced systemd failing ...

So? Then try to configure systemd to mount /usr without a ramdisk before doing anything else, and to mount /var before starting any daemons which might require /run. Both should be easy tasks for a universally configurable starting system capable of dependencies, shouldn't they?
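
For comparison, the deterministic variant is nothing exotic (a sketch only; device names are made up):

Code:
# plain /etc/fstab entries, no initramfs involved
$ cat /etc/fstab
/dev/sda2  /     ext4  noatime  0 1
/dev/sda3  /usr  ext4  noatime  0 2
/dev/sda4  /var  ext4  noatime  0 2

# openrc mounts these via localmount in the "boot" runlevel; services in
# the "default" runlevel (the daemons) are only started after "boot" is done
$ rc-update show -v | grep -E 'localmount|sshd'
           localmount |      boot
                 sshd |              default
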
Quote:
The lightweight virtual abilities are just emerging. It will be a huge security gain to easily start your browser in a cave.

A cave consisting of interacting complex daemons is a far worse security issue than the ones it aims to solve. And the user then has a false feeling of "security", which makes things even worse! If you want safe browsing, do it as an unprivileged user and do not run daemons capable of raising the privileges of such users!
Quote:
Systemd is famous of its ability to keep track of all threads.

So what? Why should I need to keep track of a process which just runs without privileges? To suddenly allow it something I should never allow?
Quote:
Quote:
And in the complicated case of course also the requirement to deal with details of the new complex layer

- Is there really an additional layer, or is there much less diverse infrastructure?
- Systemd unit creation is much simpler than bash scripting. I would need three days to learn to unit-script the network problem

I do not want to waste my time replying to this obvious nonsense. If you cannot see the additional layer, open your eyes. And dealing with that layer is not about learning the systemd unit language. For instance, systemd's ignoring of "noauto" in /etc/fstab is an issue which cannot be solved by learning systemd units: you first have to rewrite a lot of the C code to fix the implicit assumptions which are inappropriate for a universal booting system. However, I will not continue to waste my time discussing broken concepts.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sat Feb 23, 2013 6:59 pm    Post subject: Reply with quote

ulenrich wrote:
Quote:
Now we have in the simple case a superfluous layer not needed for that case but eating permanent resources
Systemd eats far fewer resources than all of
syslog+cron+xinet+consolekit+all-shells-running-scripts

This is nonsense too: you ignored the word "permanent". Even if a shell script executes a load of external POSIX utils (and most don't - only a few greps for the complicated cases), so what? For a start, we're on Unix, so process forking is trivial. Secondly, the administrator can optimise this by using a lighter shell like mksh or busybox, which has grep built in, or which runs the same process as a multicall binary for the complicated cases and can fall back to /bin/foo, which it does for ed scripts it doesn't understand (and again, not many people use ed, and even if a fork is required, it's cheap on Unix.)

I've been running with /bin/sh linked to /bin/bb for a couple of months, and apart from 3 minor patches to initscripts, which I'll submit to bugzilla since the openrc devs care about working with all sh, it was a doddle -- bootup speed is much faster, and more importantly so is the rest of the system, since all sh scripts are now run by bb, which is much leaner than bash running as sh. A consequence of modularity which now just makes me laugh at the systemd effort to speed up boot: what a colossal dead-end strategy.
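
If anyone wants to try the same thing, the switch is roughly as follows (a sketch; check first that your busybox build includes the ash/sh applet, and keep a root shell open in case an initscript chokes):

Code:
# e.g. on Gentoo: install busybox, then point /bin/sh at it
emerge -av sys-apps/busybox
cp /bin/busybox /bin/bb
ln -sf bb /bin/sh
# sanity check before rebooting
/bin/sh -c 'echo "sh is now: $(readlink /bin/sh)"'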

More significantly, those shell processes don't hang around consuming resources. Having syslogd and cron in separate processes is to be preferred, so you're wrong on the resource consumption front there too. xinetd is simple and minimal C, and its reinvention in systemd is nothing to crow about.

Consolekit is a Poettering invention that is of no use in the real world, and has been shown to be a security hole in combination with polkit already. So yeah that's a waste, but sensible admins don't run either of those in any case, so actually counts against your argument, given its provenance.

You're also missing that the Unix tradition of doing one job and doing it well, results in leaner programs, that can be used in combination: that's the whole point of the Unix philosophy of text interfaces; it encourages applications that can be used in pipelines, where the user decides what each part needs to be doing, not the programmer. ("Code mechanism, not policy.")

You also don't really appreciate how far that permeates Unix, imo. It comes from the bottom up, from the original Unix code and the tradition of sharing and patching source code by which Unix was developed: have a read of "The Unix Programming Environment" (Kernighan & Pike, 1984) if you want to start to understand the mentality. "Software Tools" (Kernighan & Plauger, 1976) is the other one to read; both show two Unix masters demonstrating how C and the Unix I/O model, which became the basis of C stdio, make for a very nice place to work. And along the way they implement an amazing amount of stuff with simple, clean code.

If you want to see the original code that inspired everyone, get hold of "Lions' Commentary on UNIX 6th edition with Source Code" by John Lions (Peer-to-Peer); reading that along with "The Design of the Unix Operating System" Bach (1986) will give you a much deeper understanding of OS implementation than any CS course.
Quote:
Quote:
And in the complicated case of course also the requirement to deal with details of the new complex layer

- Is there really an additional layer, or is there much less diverse infrastructure?

As mv said, if you can't see the additional layer, open your eyes.

Further, "diversity of infrastructure" is simply a consequent of modular programming applied to program design as well as internal architecture: that's what "do one job and do it well" is all about. It's something we cherish (for instance it means I can edit the /bin/sh symlink.)

It means administrators are free to swap components in and out as required. Note that this isn't always, or even usually, about performance: it is often the only way to ensure correct operation; again "Mechanism not policy" encapsulates this idea. Only the user knows their own requirements. As a developer of software you will by definition not be present when it is in use, so humility is required to make useful software. Not the 22-year-old "I know best" attitude of a young undergraduate, who traditionally would have been knocked into line within a software house or IT division of a company, or several, over a few years, before being allowed anywhere near architectural decisions. Typically you'd need about a decade of experience even to be in the room.
Quote:
- Systemd unit creation is much simpler than bash scripting.

Bash is not sh, and is a lot more complex if you use all the bash-specific things: most of those are designed for more complicated admin scripts, and are not needed for most simple utilities, nor the things that get done in initscripts.

It's funny how you keep bringing up irrelevant things that don't have anything to do with the discussion.
Quote:
To estimate the cost of learning all of systemd, you would on the other side have to sum up the cost of learning
syslog+cron+xinet+consolekit+shell

The thing you're missing is that sh has been around since the beginning of Unix; reading UPE will give you a better understanding of this. POSIX sh is standardised, and has been for a long time; local is specified as a requirement of Debian for any system sh, so there is a level of flexibility that most miss.

And now you have polkit realising that, oh wait, perhaps we don't always know best, so let's incorporate a javascript interpreter to do the same thing that sh does everywhere else: allow the admin as much flexibility as required. Only now it's in a whole new language that most sysadmins don't use on a daily basis (the webdevs do that), using an object model they've never had to worry about before, which can change at any point since upstream aren't big on backward compatibility. The result? Things we can't administer with confidence, that require retraining, with no guarantees at all about security or reliability, as it's a whole new language and setup, and upstream are basically crap.

If you take consolekit out of your equation, since it isn't a traditional tool, nor written with respect for the philosophy and as a result it's crap we don't use, all you're really left with is learning cron syntax which admittedly is a little obscure. It has the advantage of requiring only a minimal lexer and not a parser, which is actually something you want for a system cron, and additionally the advantage of being very simple to read and write, once you've looked at the manpages. Most of the "obscure" syntaxes in Unix config stem from this same decision to keep implementation simple, and thus can be simply written and read, using the manpage when starting out. Once you understand that, they're very basic; though of course more modern tools use things like INI-like syntax.
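
For anyone who hasn't looked at it in years, that "obscure" syntax amounts to five time fields plus a command per line, e.g. (the commands are just made-up examples):

Code:
# min  hour  dom  mon  dow  command
# run a backup at 03:15 every night
15     3     *    *    *    /usr/local/bin/backup.sh
# poll mail every five minutes, Monday to Friday (vixie-cron step syntax)
*/5    *     *    *    1-5  /usr/local/bin/poll-mail.sh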

syslogd and things like it use pretty traditional Unix config files afair: so yes, Unix requires you to read some information before you make system-wide changes. Personally I think that's a good thing, and most Gentoo users appreciate simple clean config files that are as close to upstream as possible, along with a copy of the original upstream configs plus documented examples. TBH I can't even remember the last time I edited a syslog conf. And it Just Works already! ;)

If you want an idiot-box, configure it and administer for the end-user. You end up having to do the same on Windows, but there you have a lot more hassle fighting the OS, that thinks it knows better than you do. You end up having to learn a whole lot more, weird new ways of doing everything that change every couple of years and don't really work all that well in any case (sound familiar?) And really that should be the warning that you listened to a long time ago: all the Poettering stuff of the last few years has basically been presented in order to make idiot-boxes (that can be monetised for his employer.)

Windows HAL -> hal then udev, system32 -> systemd, registry -> polkit confusion when plain PAM is a far superior solution, XML in hal/udev, and dbus brought into the early boot sequence when there was zero need for it: the tendency has been toward turning the clean configuration of Unix into a bastardised, opaque setup that only Registered sucker^W Administrators can use, which along with "Secure" Boot makes for a perfect channel for DRM, ie monetising the income stream of users. If you don't think a corporation would have any interest in doing that, you're far too naive. If there's money to be made, they have to exploit the "opportunity": anything else would be a betrayal of their "fiduciary responsibility to shareholders", which justifies everything they can get away with.
_______0
Guru


Joined: 15 Oct 2012
Posts: 521

PostPosted: Sat Mar 02, 2013 8:14 pm    Post subject: Reply with quote

ei!! I have a better idea!! Somebody port Gentoo profiles such as desktop, etc, with ALL necessary apps into ONE SINGLE binary for download.

By the way, "just works (TM)" is contrary to Gentoo way. I see systemd better fit for different audiences such as ubuntu, arch seems fond of systemd as well.

As someone mentioned earlier about *nix philosophy and software tools as analogy it would akin to looking at Linux From Scratch and argue that there's a problem with it because too many steps involved to install LFS.
_______0
Guru


Joined: 15 Oct 2012
Posts: 521

PostPosted: Sat Mar 02, 2013 8:29 pm    Post subject: Reply with quote

steveL wrote:
the above text


Wow, brilliant and highly eloquent reply with very structured thought. Enough to crush any further systemd propaganda. However, only people who understand what you're referring to will understand; the rest will keep arguing ad /dev/infinitum.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54028
Location: 56N 3W

PostPosted: Sat Mar 02, 2013 8:31 pm    Post subject: Reply with quote

_______0,

What you want is called a stage4. There are several around.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
Navar
Guru


Joined: 20 Aug 2012
Posts: 353

PostPosted: Sat Mar 23, 2013 7:55 am    Post subject: Reply with quote

_______0 wrote:
steveL wrote:
the above text


Wow, brilliant and highly eloquent reply with very structured thought. Enough to crush any further systemd propaganda. However, only people who understand what you're referring to will understand; the rest will keep arguing ad /dev/infinitum.


*cough* While as a lowly undergrad I had the Modern Operating Systems book by the mere A. Tanenbaum (y'know, the guy Linus originally thumbed his nose at) of MINIX, etc. fame, I still grokked the well-written exposé SteveL gave. :P Well, for as much as I dare care anyway. I also had a second semester where the professor was Dr. P. Juell, one of the OS designers for Wang Laboratories (again, most people here are perhaps not old enough to know the company).

It's easy to side with what SteveL mentioned, but not so much with the ambiguity spouted by the systemd fan for the sake of perceived parallelism during bootup, in some vain attempt to decrease boot time. But it is unsurprising considering the sheer amount of change that has occurred in the Linux world over almost two decades. As for the old argument about *nix file system layouts, was there ever truly an agreed-upon standard? Please don't say FHS. It seems to me that was as old a pissing contest of viewpoints as the editor wars. In short, non-deterministic decision problems tend to suck, and one shouldn't go off looking for them willy-nilly without probable cause. The whole initr(d/amfs) has always seemed like such a band-aid fix to begin with.

I'm greatly looking forward to some giant convoluted database system scheme to take over everything they've placed into wrappers at some point via RH/Poettering and co. Maybe they'll call it MCP or RHCP. Oh wait, Redmond already invented that gem. Have to give them credit for at least using XML instead of some proprietary binary format (remember, it can always get worse...)

Personally I think you're both going about this the wrong way. Just pull some strings and get ol' Poettering hired at Apple or Microsoft. Several problems solved, possibly more money and fame for him and far far more fun to watch after the smoke clears. :twisted: Or even better, they'd make him disappear into disillusioned obscurity like so many others before him. I'm thinking Metro would be a grand start... or the iPoet.

However, I'm almost afraid to hear what you'll say next of W. Richard Stevens on Unix networking. :)

Aside: to anyone not understanding just what in the hell I'm going on about, those two are something of a long-time de facto standard as authors for computer science courses on operating systems and networking at well-accredited universities. The Linus vs Tanenbaum debate in the early 90s on USENET is somewhat akin in philosophy with Poettering/RedHat ways vs old school everyone else. I sometimes wonder, now 20 years later, whether Linus hasn't perhaps reconsidered, given the size and complexity of today's kernel.
_________________
Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sun Mar 24, 2013 2:26 am    Post subject: Reply with quote

Navar wrote:
The Linus vs Tanenbaum debate in the early 90s on USENET is somewhat akin in philosophy with Poettering/RedHat ways vs old school everyone else.

Not really, unless you're casting Torvalds as the old-school, which you don't seem to be, given the ordering. Tanenbaum was advocating the micro-kernel approach, which has been favoured in academic circles for quite a while, whereas Torvalds stated it's all very well, but pragmatic concerns (principally the cost of a context-switch) make the traditional fat kernel a better approach. So Torvalds was not at all in the same position as Poettering et al, claiming that some new approach is the One True Way (it's so new, we've seen it all before: we even have a phrase for it.)

In fact he was standing up for the traditional method, based on pragmatism. A bit like he stated that something being around for 30 years is not a reason to change it. If anything, it's an argument for keeping it, as it means it's proved its worth (or "damn, that thing worked.")
Quote:
I sometimes wonder, now 20 years later, if Linus hasn't perhaps reconsidered given the size of today's kernel and complexity.

I doubt it: modularity does not require separate objects; process isolation does. Modularity just needs a conceptual and API demarcation, and discipline.

An appreciation of clean code (which used to be natural in Unix since you had to compile the code to run it, so reading it in order to patch it became a shared experience and knowledge-base), and humility borne of experience lead to the latter. That humility, along with decades of best practice, is what the Unix "philosophy" encapsulates, usually via short sayings that stick in your head when you code (one of my favourites is: "Do the simplest thing that could possibly work.")
Navar
Guru


Joined: 20 Aug 2012
Posts: 353

PostPosted: Sun Mar 24, 2013 6:17 am    Post subject: Reply with quote

Sorry, I didn't mean to compare the age of the arguments to determine relevance; the order and scope of them were (to me) somewhat irrelevant. Microkernels were conceptualized in the 80s, the beginning of the personal computer era, and that is what they were initially aimed at - what we're still using now on a grander scale. And the non-trivial issues of device drivers, file systems and protocols being fought over and added into the monolithic side of things remain headaches today, except now we add the (not new) issue of parallelism. There's rarely something new in concept, just the usual old 'new' thing re-presented.

While it was encouraged academically because Tanenbaum wrote well (including details about the monolithic approach among many other topics), having implemented the concepts in practice to play with was far more valuable than just reading about them. MINIX and the microkernel movement still had active, applied real-world use: Mach, Amoeba, UNICOS (which was good enough for D. Ritchie and Cray), etc. My point was that anyone with a pulse who paid attention, within the academic knowledge base or not, should have understood your prior post.

Besides the fact that MINIX inspired Linus to produce the Linux kernel, the overall OS concepts were the same, just handled differently. I suppose that other than on embedded devices like phones (Symbian/Psion EPOC) it remains an 80s fad. Fair enough, I'm not advocating either direction; however, even Linus himself, on the premise of the Unix philosophy (small, do one specific task and do it well), stated that the microkernel approach was a good idea. But later on he essentially says outright that due to the performance overhead (distributed algorithms, e.g.) and the lack of ease of writing and maintaining them, they are essentially 'crap' once you want to do more than simple things. A concept versus designed implementation issue, with a main theme of address space (separation or not).

Which sounded a bit like someone's systemd wrapper commentary (udev merger, etc.). Pushing for doing things in an event-driven, asynchronous way (complex - which just sounds to me like they're copying Canonical's upstart) rather than a synchronous, dependency-based way (simple). And then for some reason marrying udev into it. I personally don't see a reason to join the init process at the hip with a hotplug-capable generic device manager, but what do I know. Which ultimately becomes a very old argument of simplicity vs complexity in approach. That was where my "somewhat akin" analogy came in.

I thought udev was a good thing, really, even if the rule files sometimes were problematic. Even HAL seemed ok for the most part, but maybe I wasn't back into the Linux scene long enough to know. Somewhere in all the drama, which I honestly hadn't invested much reading into, I lost track of where GregK, from the kernel side, stopped being involved with udev, with Sievers and Poettering taking over. Is that ultimately what began this mess - RH trying to one-up Canonical's lead?
_________________
Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn.
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Sun Mar 24, 2013 7:50 am    Post subject: Reply with quote

I do not agree with these comparisons: The Tanenbaum-Linus discussion was not a discussion about high complexity vs. low complexity. It was a discussion about two approaches for which one has the disadvantage of high complexity while the other has the disadvantage of a time overhead.
The GnomeOS approach is different: It gives you both disadvantages simultaneously but claims it is better "because it is new", forgetting that it is new only because no sane person would implement something whose main "advantage" is being more complex and slower.

Take consolekit: why can't you just record in one well-defined file who is logged into X? Why must you have daemons running all the time just for the rare case that a program needs to access that database?

Take policykit: why can't you just modify the device permissions when people log in and out - instead of running a daemon permanently and making every access to the devices slower by wrapping it through the daemon?
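
(That pre-*kit approach is not hypothetical - pam_console and friends did roughly this. A hand-rolled sketch of the idea, with the device list and calling convention made up:)

Code:
#!/bin/sh
# called at login/logout of the console user; hands over the local device
# nodes and takes them back again - no permanently running daemon needed
USER="$1"                                # who logged in or out
EVENT="$2"                               # "login" or "logout"
DEVICES="/dev/snd/* /dev/dri/* /dev/video0"

case "$EVENT" in
    login)
        for d in $DEVICES; do [ -e "$d" ] && chown "$USER" "$d"; done ;;
    logout)
        for d in $DEVICES; do [ -e "$d" ] && chown root "$d"; done ;;
esac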

Essentially, the idea is just to make a simple task complicated. Afterwards some "applications" are invented, e.g. "fast user switching is not so easy without this approach". Maybe, but who needs fast user switching? And if you do not need it, why should you accept all this complexity and slowdown?
The disadvantages then become clearly visible in things like NetworkManager: prior to policykit you would have run NetworkManager as its own user/group, and you would have had to change the permissions of all network-related files (/etc/resolv.conf etc.) to allow that user and/or group to modify them. This would be a powerful daemon, but nevertheless its possibilities are clearly limited in the case of any program bugs or exploits; essentially it could, in case of a bug, be misused to do anything it wants with your network, but that's it. Nowadays, with a misconfiguration or bug in policykit, you cannot even be sure what the limitations of that daemon are. And please don't say such misconfigurations and bugs do not happen: they happened and still happen in less complex systems, and they certainly happen in more complex systems.
Quote:
I thought udev was a good thing, really, even if the rule files sometimes were problematic. Even HAL seemed ok for the most part

I completely agree with this. The main problem with HAL was that it used xml as its "main language". This is rather over the top, but typical of Poettering: it was just new and hyped, so by Poettering's definitions better than anything else. That it is inappropriate for human-readable configuration and unnecessary for a flat data structure like hardware devices was completely ignored - but obviously the GNOME guys have never learned this to this day. (Don't get me wrong: xml has its uses. For example, I think that the --xml switch in eix [which was originally not implemented by me] is a good idea: for inter-program communication it is a nice interface, and even more so for tree-structured data [the tree contains categories, which contain packages, which contain various versions {available or installed}, which in turn contain various metadata depending on the version type] - even in cases where the data must be human-edited, xml is an appropriate format here, because this complexity is needed.)
Quote:
Which sounded a bit like someone's systemd wrapper commentary (udev merger, etc.). Pushing for doing things in an event driven asynchronous way

I think you mean my comment. I am not against an asynchronous way in principle: for many things (interactive programs, also hotplugging) it is certainly appropriate. However, it is inappropriate for the startup process: the startup process should be easy to test and then work reliably and reproducibly. This is simply due to the way computers are used in some cases. For example, I administer my parents' machines, which are 400 miles away. Sometimes, when I visit them, I make some complex changes to a machine and then test it. Now if the machine came up only by accident, because some race condition happened to be met, and does not come up the next time, it is a catastrophe, because then I will not be able to change anything over the net. Similar things happen in more professional settings too: you simply have to be able to rely on booting working. Making booting artificially non-reproducible is the worst thing you can do to the boot process - all efforts should be directed in the opposite direction to what Poettering is doing to this process. Once again: after the boot process an asynchronous way is fine - but keep it where it is appropriate.
Anon-E-moose
Watchman


Joined: 23 May 2008
Posts: 6095
Location: Dallas area

PostPosted: Sun Mar 24, 2013 10:16 pm    Post subject: Reply with quote

My main problem with hal was that as soon as it was embraced by the masses (against the will of many), it was dropped because "it was too complicated".

Before consolekit/policykit, we had devkit and other stop-gap measures.

The problem is that these things don't seem to be well thought out ahead of time, but seem to be based around "I have an idea", which is then sold to the masses without a well-thought-out plan of how to get there - hence the many bumps along the way.

Quite frankly if I wanted a one-size fits all solution, then I would still be using windows/apple/ibm/wang/whoever exclusively.

Just my thoughts


Edit to add: as to who was more right, Linus vs Tanenbaum, I'll let the future decide.
But I do note that outside of toy systems I don't see MINIX being used,
while Linux is used in a fair amount of the commercial world as well as in toy systems. :)
_________________
PRIME x570-pro, 3700x, 6.1 zen kernel
gcc 13, profile 17.0 (custom bare multilib), openrc, wayland
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Mon Mar 25, 2013 10:05 am    Post subject: Reply with quote

Anon-E-moose wrote:
Edit to add: as to who was more right, Linus vs Tanenbaum, I'll let the future decide.
But I do note that outside of toy systems I don't see MINIX being used,
while Linux is used in a fair amount of the commercial world as well as in toy systems. :)

The future cannot decide technical questions. This is more a social than a technical question: many more man-years have been invested into Linux than into microkernels (I am not sure whether MINIX was already using a microkernel; moreover, MINIX is not only the kernel, and it was intended as a toy-only system). Experience shows that there is no correlation between popularity (by which I mean not only user popularity but also how much effort is invested) and technical superiority. Otherwise some commercial OSes would have been gone long ago.