Gentoo Forums
systemd bad for dev and Gentoo?

mv
Watchman

Joined: 20 Apr 2005
Posts: 5673

Posted: Mon Jul 07, 2014 3:00 pm

skarnet wrote:
s6-svc is simply a command to send signals to a process managed by s6.

skarnet, since you seem to know s6 very well :wink:
I know that s6 was not developed for that purpose, but maybe s6 could be used to solve the problem described here?
More precisely, I want to "start" various jobs (for different users in different shells, maybe even in different chroots), but their execution should actually be postponed until some sort of "signal" is sent to these tasks. When they are finished, they should report back their exit status.
I know that this differs from what one wants for an init system, but your description reads as if s6 could be able to do this as a "side" result (no matter which init system one uses). Do I understand this correctly?
skarnet
n00b

Joined: 01 Jul 2014
Posts: 18
Location: Paris, France

Posted: Tue Jul 08, 2014 12:28 am

You could use a supervision tree to monitor your jobs, yes. However, what you're looking for sounds like a complete dependency system, with logical operators, sequence operators, etc. Like OpenRC, such a thing would be better implemented on top of s6, which provides process supervision, notification tools, and hooks to implement dependency management, but not dependency management itself.
_________________
Laurent
mv
Watchman

Joined: 20 Apr 2005
Posts: 5673

Posted: Tue Jul 08, 2014 6:16 am

skarnet wrote:
You could use a supervision tree to monitor your jobs, yes.

Even user-started ones, running in a terminal (with stdin/out/err connected to a terminal)? The examples mentioned in the s6 documentation are always daemons, which is not what I have in mind here... I mean ordinary commands like e.g. emerge ..., make ..., octave ... etc.
Quote:
However, what you're looking for sounds like a complete dependency system, with logical operators, sequence operators, etc. Like OpenRC

No, like a primitive shell script using a few simple commands meaning "s6, now start process x", "s6, now start process y", "s6, wait until process x is finished", "s6, is process y already finished, and what is its exit status?".
If tasks like the above each boil down to a simple command, I could just hack up a primitive shell script "on the run" whenever I want something (and possibly stop the script and replace it with a different script, if I decide for some reason to change the dependencies completely while things are still running...)
It seems to me that s6 is indeed what I was looking for...
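The kind of throwaway "manager" script mv describes can be sketched with nothing but POSIX shell job control, no s6 at all; the job functions below are invented stand-ins for real commands like emerge or make:

```shell
#!/bin/sh
# Hypothetical stand-ins for the real jobs (emerge, make, octave, ...).
job_x() { sleep 1; exit 0; }
job_y() { sleep 1; exit 3; }

job_x & X_PID=$!          # "now start process x"
job_y & Y_PID=$!          # "now start process y"

wait "$X_PID"             # "wait until process x is finished"

# "is process y finished, and what is its exit status?"
# wait on a specific pid returns that child's exit status,
# even if it has already terminated.
wait "$Y_PID"
Y_STATUS=$?
echo "Y exited with status $Y_STATUS"
```

What plain job control cannot give you, and what s6 adds, is supervision that outlives the script: the jobs here die with the shell that started them.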
skarnet
n00b

Joined: 01 Jul 2014
Posts: 18
Location: Paris, France

Posted: Tue Jul 08, 2014 9:04 am

It is possible to make s6 control short-lived processes. It's not the typical use case, and I'm definitely not claiming that it's the best tool to do so, but it's possible. And there are indeed simple commands to tell it "start X", "start Y", "wait until X is finished" and "report Y status", yes.
The devil, as often, is in the details. You need some glue to make it work, and the glue might turn out not to be so simple after all: you'll end up doing a lot of scripting. For instance, the above sequence of 4 commands would look something like:

# infrastructure
mkdir -p scan/X scan/Y scan/.s6-svscan
ln -s /bin/true scan/.s6-svscan/finish
touch scan/X/down scan/X/nosetsid scan/Y/down scan/Y/nosetsid
s6-svscan -t0 scan & # start the supervision tree
echo "#!/bin/sh" > scan/X/run ; echo "#!/bin/sh" > scan/Y/run
echo "command_line_for_X" >> scan/X/run ; chmod 755 scan/X/run
echo "command_line_for_Y" >> scan/Y/run ; chmod 755 scan/Y/run
cat > scan/Y/finish <<'EOF' # quote EOF so $1/$2 reach the script unexpanded
#!/bin/sh
echo "$1" > exitcode
echo "$2" > exitsignal
EOF
chmod 755 scan/Y/finish
s6-svscanctl -a scan # make the supervision tree aware of X and Y

# start X
s6-svc -o scan/X

# start Y
s6-svc -o scan/Y

# wait until X is finished
s6-svwait -d scan/X

# poll for Y completion
if s6-svstat scan/Y | grep -q ^down ; then EXITCODE=$(cat scan/Y/exitcode) ; EXITSIGNAL=$(cat scan/Y/exitsignal) ; fi

# cleanup
s6-svscanctl -6 scan # kill the supervision tree
rm -rf scan

This ignores some small race conditions: you'd need to wait until the supervision tree is all set before starting X; you'd need to make sure X has been started before waiting for its completion; you'd need to change scan/.s6-svscan/finish to notify your script that it's safe to remove scan/; and so on. There are s6 primitives to avoid such race conditions, but the final script would look a lot more complex.

My point is that the functionality you want is all in the scripting around s6 primitives, and since you're using a supervision infrastructure for something other than its original purpose, it's a bit cumbersome to use. Sure, you have a rock-solid basis that monitors your commands, but given the amount of scripting you have to do around it, especially if you want to implement a dependency management system, I'm not convinced that you'd be better off using s6 than designing your own enterprise job scheduler from scratch. It's a possibility, but I'm always wary of using a hammer to solve an issue that doesn't quite look like a nail just because I happen to have a hammer: this is the kind of thinking that leads to nonsense such as "D-Bus activation", where a bus substitutes itself for a supervision tool. (The right design is: have a bus, have a supervision suite, and write a little glue to make them work together - for instance a bus client controlling the supervision suite.)

The most useful contribution from s6 to your project would probably be the s6-ftrig-* commands, which you could use to avoid race conditions, set up notifications, and wait for job completions without polling.
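As a rough, s6-free illustration of why notification beats polling: s6's ftrig channels are built on FIFOs, and a plain FIFO already lets a waiter block until an event arrives instead of looping on a status check (all paths below are invented for the example):

```shell
#!/bin/sh
# Create a notification channel; s6's "ftrig" event directories
# are FIFO-based underneath, so this is the same basic mechanism.
dir=$(mktemp -d)
mkfifo "$dir/event"

# A fake job: runs for a moment, then announces its completion.
( sleep 1; echo done > "$dir/event" ) &

# The waiter blocks on the read: no sleep-and-retry polling loop,
# and the wakeup happens the instant the job reports.
read status < "$dir/event"
echo "job reported: $status"
rm -rf "$dir"
```

The s6-ftrig-* tools wrap this pattern with subscription before the event fires, which is what closes the races mentioned above.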
_________________
Laurent
mv
Watchman

Joined: 20 Apr 2005
Posts: 5673

Posted: Tue Jul 08, 2014 1:06 pm

Thanks for your information and script snippet - I think I can find out the rest by myself.
skarnet wrote:
You need some glue to make it work, and the glue might turn out not to be so simple after all

At the moment it appears to me that the only glue would be a script "schedule cmd" which generates an appropriate scan/... directory for "cmd" and calls s6-svscan. All of the rest (in a custom "manager" script) is perhaps just straightforward calls to s6 without needing any glue there (at least, I hope).
Quote:
ignoring the small race conditions

I know: races were the main technical difficulty when writing the small "starter" script mentioned in the other thread.
Quote:
you'd need to wait until the supervision tree is all set before starting X

That's a non-issue: Of course, I would expect that all commands I want to manage with the "manager" script have already run their "schedule" before the manager script is even started.
Quote:
you'd need to make sure X has been started before waiting for its completion

This, however, I did not expect. You mean that (even after the whole scan/ directory is prepared with the schedule script) a call like
Code:
s6-svc -o scan/X

immediately followed by
Code:
s6-svwait -d scan/X

might fail, because X is not yet "up enough" to signal that it is running? This would indeed be a problem.
Quote:
My point is that the functionality you want is all in the scripting around s6 primitives, and since you're using a supervision infrastructure for something else than its original purpose, it's a bit cumbersome to use.

I agree, but of everything I have seen so far, it comes closest to what I want.
ballsystemlord
n00b

Joined: 26 Feb 2013
Posts: 66

Posted: Wed Jul 09, 2014 1:59 am

Shamus397 wrote:
What's wrong with wpa_supplicant? It even comes with a GUI tool, wpa_gui. It's light years better than NetworkManager, that's for sure. :)


Thanks, I'll mention this to others.
depontius
Advocate

Joined: 05 May 2004
Posts: 3231

Posted: Wed Jul 09, 2014 12:31 pm

Since the same people hang out on multiple systemd threads and will see this anyway, this is the most appropriately titled thread for it.

Assertion: However good/bad systemd may be for developers and/or Gentoo, it's worse for distributions.

Nor can we talk about this without talking about "Policy" in general, no doubt with references to PolicyKit specifically.

A distribution has many jobs - things like selection, packaging, and maintenance - but another of those jobs is policy. The Linux kernel specifically tries to avoid enforcing policy, believing it belongs in userspace; that is most likely a correct decision, as long as the kernel has policy enforcement mechanisms, which it does. There are even "good software practices" that try to separate code/mechanism from policy, and I tend to think that's a good idea, too.

But the real upshot of all of this is that a lot of people are "Trying not to touch policy with a 10-foot pole." Most programmers would rather program than do policy, so the avoidance is understandable. Policy is also a very important function, because in many respects it dictates the security of the system. All of the security mechanisms in the world are worthless if they aren't correctly used, and that is the function of Policy. That also winds up being another job of the distribution. However unless it's a specialized security-oriented distribution, I get the impression that most distributions don't really want to touch Policy, either. Policy is a bit of an orphan.

Start with this situation, and enter freedesktop.org. As near as I can tell, they like the Windows XP desktop policy model, and believe that the path to Linux desktop dominance is to copy it. I believe that is the root of PolicyKit, and of some of the initial attitude behind systemd. The classic Linux desktops have never really embraced this need, and the freedesktop.org people are moving into a void, making it their own. The way they are doing so is also really annoying to old-school Linux/Unix types. But PolicyKit offers distributions the option to "ignore messy policy stuff" and move on to fun stuff like package selection/configuration, etc. It gives them an easy way to surrender messy security stuff to upstream.

But it's necessary to recognize that the Policy Void exists, and PolicyKit/systemd are filling a real need. Personally I think that something like PolicyKit is a good idea, and the right thing to do, though I certainly agree that there are problems with it. Others have talked about proper sudo configuration and such, and perhaps that can be part of the solution. But the essence here is that PolicyKit offers something close to a complete turn-key solution, and sudo doesn't. It may only be a matter of packaging and combination with other tools - but that packaging setup isn't done. Imagine for a moment the "NotPolicyKit" package that requires sudo and provides an /etc/sudoers file, as well as a set of sudo wrapper scripts, and perhaps some other pieces. Installing "NotPolicyKit" would implement a base system policy that most users wouldn't need to touch in order to have a usable system, but could be tweaked by the user to meet needs.
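To make the idea concrete, here is a heavily hedged sketch of what such a "NotPolicyKit" package might ship: a sudoers fragment plus a wrapper script. Every path, rule, and name below is invented for illustration, not taken from any real package:

```shell
#!/bin/sh
# Stage the hypothetical package contents in a scratch directory
# (a real package would install to /etc/sudoers.d and /usr/bin).
stage=$(mktemp -d)

cat > "$stage/notpolicykit.sudoers" <<'EOF'
# Hypothetical base desktop policy: wheel members may manage power
# and removable media without a password; everything else still
# requires authentication.
%wheel ALL=(root) NOPASSWD: /sbin/shutdown, /sbin/reboot
%wheel ALL=(root) NOPASSWD: /bin/mount, /bin/umount
EOF

cat > "$stage/np-reboot" <<'EOF'
#!/bin/sh
# Wrapper: the desktop calls this instead of talking to PolicyKit.
exec sudo /sbin/reboot "$@"
EOF
chmod 755 "$stage/np-reboot"

rules=$(grep -c NOPASSWD "$stage/notpolicykit.sudoers")
echo "staged $rules policy rules"
rm -rf "$stage"
```

The point of the sketch is only that the "turn-key" part is packaging and defaults, not new mechanism: sudo already provides the enforcement.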

You can't insist on a void - if you don't like what's currently filling it, you've got to fill it with something else. The thing to recognize here is that PolicyKit and sudo/et-al are not equivalent - but with some configuration sugar they can be made so.
_________________
.sigs waste space and bandwidth
steveL
Advocate

Joined: 13 Sep 2006
Posts: 4776
Location: The Peanut Gallery

Posted: Wed Jul 09, 2014 1:16 pm

skarnet wrote:
steveL wrote:
It's also a summary of how openrc works, and has worked for quite a few years now. man runscript (s6-svc sounds very similar to rc-update.)

Not quite. rc-update looks like it manages runlevels; s6-svc is simply a command to send signals to a process managed by s6. From what I see in the documentation, the OpenRC architecture is really a dependency management system, which can have several backends; s6 would be suited to be one of these backends.

You're responding to a parenthesis, when the main point is about:
skarnet wrote:
All of the daemon configuration, command line, etc. is contained in run scripts, which is the main additional concept that supervision suites bring

With respect, that's not a new concept since it describes precisely what runscript does, or rather the infrastructure it provides.

Though it's good to get into the detail of what s6 does. You appear to be saying it replaces sysvinit, and stays around to monitor things.
That's cool, that's what I'm on about too.
Quote:
steveL wrote:
It would be cool to have a small daemon handling that (not pid 1 itself, but started from pid1, and staying around) to react to events, but the key is to keep that minimal, and add incrementally to what we already have, without needing to reinvent what's been reinvented very nicely already.

I agree. s6 is not competing with OpenRC in that regard, it's complementary. For instance, you say "a daemon to react to events" and my reaction is "sure, but that daemon needs to be supervised".

Yes, which is why I brought up monitoring.
Quote:
steveL wrote:
In essence that could start from a blank slate since we already have the pieces to call upon to do all the dependency resolving and service starting.

From what I can see, OpenRC has all it needs to do the dependency resolving, but not the service starting. The daemons are still forked by the "runscript start" processes, which use start-stop-daemon and pid files; in that respect, it is not an improvement over sysvinit. Supervision suites provide a different model, where daemons are not scions of the init scripts, but scions of the supervision tree, which is totally independent from the init scripts. The point is that the monitor ("supervisor") process is the parent of the daemon, which allows precise control - always start the daemon in the same, reproducible environment, send it signals without pid files, instantly notice when it dies, etc.

Yeah you're describing this:
steveL wrote:
The most useful thing it could do is to be a monitor process of some sort, that would ideally fork a monitor-per-service as well, and the top one would be an overall control/monitor pair.

I'm well aware of the parent issue; it's why I sent this link to the ML a few years ago (see Q 3.2).
Quote:
This is exactly what s6 does. The s6-svscan and s6-supervise programs are the monitors you speak of; and the s6-supervise processes are parents of the daemons.

OK, so we're on the same page.
Quote:
steveL wrote:
Perhaps it could have an xinetd equivalent for ephemeral services (that would need to be somewhat customised to the situation.)

Network services are a higher-level consideration, outside of the scope of a supervision suite and even of a dependency management system. They are services, to be handled like any other service by the supervision suite / dependency management.

No they're not, imo; they're services which go down at random points (hence the need for event-based), and as such they affect their dependents: hence the need for the dependency manager to be aware of what's happening with them (so not out of scope.)

It's good that dhcpcd is capable of handling all the low-level events and deciding what to do, eg wrt making traffic use the fastest interface. It makes sense to use it as one part of the overall puzzle.
Quote:
I have a s6-networking package that provides xinetd-like functionality - one daemon per service, which is the most flexible way of doing that. But it's a discussion for another time.

I don't agree: afaic the xinetd-replacement needs to tie into other events, and is part of the overall "supervision suite" since it starts and stops less frequently-used services, and must report back if something goes wrong with something it monitors.

Leaving it out of a discussion on overall design just means we haven't addressed everything that needs to be addressed; after all (x)inetd is the original event-based service manager.
Quote:
steveL wrote:
sysvinit doesn't need to go; though it could potentially simplify to passing signals through, there's no real need to, imo.

The whole point of this discussion is to provide an alternative to systemd. If your proposed alternative is sysvinit, well, why not - it's certainly simpler than systemd and it works, but there are good reasons why people want to switch to systemd in the first place: sysvinit is incomplete and does not make service supervision and management easy.

You're mixing up sysvinit with sysvrc. Or at least that's the only way your argument makes sense to me.
Quote:
I think s6 is a better init choice. OpenRC does not compete with s6, just as it does not compete with sysvinit; it would simply benefit from running on top of s6.

Yeah I'm not so bothered about replacing pid1; more interested in what pid2 does. And there I'd like the event-handling daemon to use rc-update instead of rc-update being run directly from inittab.

I'd like to keep that separation of concerns. Like I said there's then potential to simplify pid1, but again there isn't any harm in keeping it doing exactly what it's currently doing, and it makes sense to allow the additional flexibility. It's always easy to simplify it later on, and a pita to worry about that aspect now, since it's not the area of concern/interest.
Quote:
steveL wrote:
I'd be loath to give up the runscript format myself

And you wouldn't have to.
The contents of the start() and stop() functions, however, would need to change in a s6-based system. Instead of starting and stopping the daemons directly, you'd call s6-svc to tell the supervision tree to start and stop them.

Yeah that's a change of format. Though we can always hide that under a start-stop function.
Quote:
The daemon command line itself would go into a "run script" (not the same thing as OpenRC's runscript program... naming is hard) that would be provided with the daemon package, and would be started by s6-supervise, not the user running the runscript command. This is the key point.

It starts to sound too tangled to me; ideally we'd like s6-* to start them based on a similar command line to ssd (not exactly the same, but conceptually.)

So can I make sure I understand your posts: s6 does supervision and monitoring of processes, but not dependency-management? Thus openrc could, once it's decided what needs to run and in what order, notify s6 about that, and leave it to supervise/monitor. And it'd then make calls back into openrc, eg when a service goes down, which simply runs commands to start/stop/reinit etc w/e is required, and then disappears like it always has. And those commands are calls back into s6, to tell it what needs to happen, and wait or not as required.
depontius
Advocate

Joined: 05 May 2004
Posts: 3231

Posted: Wed Jul 09, 2014 1:55 pm

steveL wrote:
The daemon command line itself would go into a "run script" (not the same thing as OpenRC's runscript program... naming is hard) that would be provided with the daemon package, and would be started by s6-supervise, not the user running the runscript command. This is the key point.

It starts to sound too tangled to me; ideally we'd like s6-* to start them based on a similar command line to ssd (not exactly the same, but conceptually.)

So can I make sure I understand your posts: s6 does supervision and monitoring of processes, but not dependency-management? Thus openrc could, once it's decided what needs to run and in what order, notify s6 about that, and leave it to supervise/monitor. And it'd then make calls back into openrc, eg when a service goes down, which simply runs commands to start/stop/reinit etc w/e is required, and then disappears like it always has. And those commands are calls back into s6, to tell it what needs to happen, and wait or not as required.

So a strawman could be built as a local fork of openrc. It would be changed to be dependent on s6 instead of sysvinit, and in particular the start-stop-daemon command would need heavy editing. Obviously there would also need to be some system configuration to make the system init with s6 instead of sysvinit, but I believe that using s6 instead of sysvinit would accomplish that. Is it almost that simple?

The problem I see here is that a fail may render the system unusable/unbootable, so a handy chroot mechanism would be necessary. It would probably also be a good idea to have an "ebuild install" of everything - particularly the vanilla openrc and sysvinit - handy so that one could go back and forth with a simple "ebuild merge". Something might be done with a unionfs to overwrite part of root, but that would require initramfs fiddling.

Again, does it approach that simple?
_________________
.sigs waste space and bandwidth
steveL
Advocate

Joined: 13 Sep 2006
Posts: 4776
Location: The Peanut Gallery

Posted: Wed Jul 09, 2014 2:34 pm

depontius wrote:
So a strawman could be built as a local fork of openrc. It would be changed to be dependent on s6 instead of sysvinit, and in particular the start-stop-daemon command would need heavy editing. Obviously there would also need to be some system configuration to make the system init with s6 instead of sysvinit, but I believe that using s6 instead of sysvinit would accomplish that. Is it almost that simple?

Well yes, that's the whole point of modular, encapsulated software.

Though I don't personally see the need to get rid of sysvinit; just put something like s6 (or a new program using some code from it/calling out to it) in the middle, so inittab doesn't start /sbin/rc directly.
Quote:
The problem I see here is that a fail may render the system unusable/unbootable, so a handy chroot mechanism would be necessary. It would probably also be a good idea to have an "ebuild install" of everything - particularly the vanilla openrc and sysvinit - handy so that one could go back and forth with a simple "ebuild merge". Something might be done with a unionfs to overwrite part of root, but that would require initramfs fiddling.

Again, does it approach that simple?

Even simpler if we don't get rid of the baby (sysvinit) with the bathwater (sysvrc); ofc in Gentoo we got rid of sysvrc ages ago, but for people coming to this discussion from other distros, the two tend to be synonymous, at least until the distinction is made clear to them.

Given that we keep sysvinit as pid1, it's easy to edit the inittab from a rescue shell; or one could mod /sbin/rc itself slightly, to run the new mechanism or revert to old, depending on runlevel.
depontius
Advocate

Joined: 05 May 2004
Posts: 3231

Posted: Wed Jul 09, 2014 3:03 pm

steveL wrote:

Even simpler if we don't get rid of the baby (sysvinit) with the bathwater (sysvrc); ofc in Gentoo we got rid of sysvrc ages ago, but for people coming to this discussion from other distros, the two tend to be synonymous, at least until the distinction is made clear to them.

Given that we keep sysvinit as pid1, it's easy to edit the inittab from a rescue shell; or one could mod /sbin/rc itself slightly, to run the new mechanism or revert to old, depending on runlevel.


So in the even simpler vein, it's really a matter of keeping sysvinit and letting it boot the system. Install s6, but not as the PID1 init daemon.
Don't bother with a forked OpenRC ebuild yet, just save aside start-stop-daemon, and substitute an edited version that uses s6.
At that point switching back and forth becomes as simple as swapping start-stop-daemon between the base and s6-enabled versions. (and rebooting)

If it's really that simple, what are we waiting for, other than spare time?
(Spare time has been in awfully short supply this year. Employer churn, but at least I'm employed.)
Oops, it's a little harder than that - I see that start-stop-daemon is ELF, not a script. Not a show-stopper, just a speed-bump.
_________________
.sigs waste space and bandwidth
skarnet
n00b

Joined: 01 Jul 2014
Posts: 18
Location: Paris, France

Posted: Wed Jul 09, 2014 6:35 pm

You can do that indeed. However, once you have s6 installed, sysvinit becomes redundant. Replacing sysvrc with OpenRC + s6 is the hard (and interesting) part; but once this is done, you won't have a reason to keep sysvinit around.
I wouldn't call sysvinit a "baby". It works, for sure, and it's maintainable, but it's far from perfect. For instance, it polls /dev/initctl every five seconds "to check that it is still there". Yuck.
Eliminating sysvinit can definitely be a follow-up project though, when the OpenRC + s6 infrastructure is running.
_________________
Laurent
skarnet
n00b

Joined: 01 Jul 2014
Posts: 18
Location: Paris, France

Posted: Wed Jul 09, 2014 7:32 pm

(About network services.)
steveL wrote:
No they're not, imo; they're services which go down at random points (hence the need for event-based), and as such they affect their dependents: hence the need for the dependency manager to be aware of what's happening with them (so not out of scope.)
It's good that dhcpcd is capable of handling all the low-level events and deciding what to do, eg wrt making traffic use the fastest interface. It makes sense to use it as one part of the overall puzzle.

I don't disagree with that. What I am saying is that network services should be users of the dependency management infrastructure. They are event consumers (can't start sshd if eth0 is down) and event producers (httpd died, ouch). They do not have to be tied into the design of the dependency management system; they're not part of the same layer as OpenRC and s6. They are on the layer above, and just need to use the event producer interface that the dependency management system provides.

steveL wrote:
I don't agree: afaic the xinetd-replacement needs to tie into other events, and is part of the overall "supervision suite" since it starts and stops less frequently-used services, and must report back if something goes wrong with something it monitors.

I think I see what you mean; we just attach a different meaning to "xinetd-replacement".
I'm not a xinetd user, and am not aware of all its functionality. The model I use is very simple: one process per port, with s6-tcpserver4 (or 6). Enabling or disabling a network service is exactly the same as enabling or disabling any other service, since there's one daemon per network service. Event production follows the same pattern: if a superserver process dies, its supervisor notices and sends the information to the dependency management system.
I have found that to be a lot easier to maintain and evolve than the inetd model, which uses a different paradigm to start or stop a service: editing a configuration file and sending a signal (just like sysvinit, or Upstart, or...)

steveL wrote:
Leaving it out of a discussion on overall design just means we haven't addressed everything that needs to be addressed; after all (x)inetd is the original event-based service manager.

I'm not sure what you mean by that. inetd is a superserver, not a service manager. It's a long-running process that forks a child per connection it receives, that's all. It reaps its children when they die, but that usually means the client has closed the connection, which is normal; there's no notion of service management here. Maybe xinetd does more? Can you give a concrete example of an event that would require integration of a superserver with a dependency manager?

steveL wrote:
Yeah I'm not so bothered about replacing pid1; more interested in what pid2 does. And there I'd like the event-handling daemon to use rc-update instead of rc-update being run directly from inittab.
I'd like to keep that separation of concerns. Like I said there's then potential to simplify pid1, but again there isn't any harm in keeping it doing exactly what it's currently doing, and it makes sense to allow the additional flexibility. It's always easy to simplify it later on, and a pita to worry about that aspect now, since it's not the area of concern/interest.

I agree, those are separate issues, and getting rid of sysvinit is a different project from integrating OpenRC and s6.

steveL wrote:
It starts to sound too tangled to me; ideally we'd like s6-* to start them based on a similar command line to ssd (not exactly the same, but conceptually.)

The thing is that the supervision tree must always have access to the command line used to start the daemon, because one of the key features is automatic restart. If your daemon dies without you asking for it, s6-supervise will automatically restart it. (After a small delay, to prevent busylooping if the failure remains.) And it obviously needs to run the exact same command line as you gave it in the first place. So, in s6, it reads that command line in a file, the "run script", which is considered part of the service configuration. You don't send a command line to s6, you tell s6 to start a service configured in $dir, and it will simply fork and exec $dir/run. It will also autorun $dir/finish when $dir/run dies, for cleaning purposes.
You could probably write $dir/run on demand, when OpenRC tries to start a service, but that is rather ugly: the run file is really an integral part of the daemon configuration, it should not be considered mutable.
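A toy emulation of the servicedir contract just described, with throwaway paths. This only shows the run/finish handoff; real s6-supervise additionally restarts ./run when it dies, applies a restart delay, and handles signals:

```shell
#!/bin/sh
# Build a throwaway service directory holding the run/finish pair.
dir=$(mktemp -d)

cat > "$dir/run" <<'EOF'
#!/bin/sh
# The daemon's full command line lives here, fixed and reproducible,
# instead of in an init script. (A stand-in "daemon" that exits 42.)
exec sh -c 'exit 42'
EOF

cat > "$dir/finish" <<'EOF'
#!/bin/sh
# The finish hook receives the exit code ($1) and signal ($2) of ./run.
echo "$1" > "${0%/finish}/exitcode"
EOF

chmod 755 "$dir/run" "$dir/finish"

# The heart of what a supervisor does: run the service, then its hook.
"$dir/run"; "$dir/finish" "$?" "none"
code=$(cat "$dir/exitcode")
echo "run exited with $code"
rm -rf "$dir"
```

Because the supervisor forks ./run itself, the environment is identical on every start, which is exactly the reproducibility argument made earlier in the thread.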

steveL wrote:

So can I make sure I understand your posts: s6 does supervision and monitoring of processes, but not dependency-management? Thus openrc could, once it's decided what needs to run and in what order, notify s6 about that, and leave it to supervise/monitor. And it'd then make calls back into openrc, eg when a service goes down, which simply runs commands to start/stop/reinit etc w/e is required, and then disappears like it always has. And those commands are calls back into s6, to tell it what needs to happen, and wait or not as required.

Yes, that is exactly the way I see it and think it should be done.
_________________
Laurent
skarnet
n00b

Joined: 01 Jul 2014
Posts: 18
Location: Paris, France

Posted: Wed Jul 09, 2014 7:42 pm

mv wrote:
You mean that (even after all the scan/ directory is prepared with the schedule script) a call like
Code:
s6-svc -o scan/X

immediately followed by
Code:
s6-svwait -d scan/X

might fail, because X is not yet "up enough" to signal that it is running? This would indeed be a problem.

Yes, that's what I mean. Fortunately, it's easily solved: replace "s6-svc -o scan/X" with "s6-ftrig-listen1 scan/X/event u s6-svc -o scan/X", which will not return until X is up. I should integrate this functionality into s6-svc.
_________________
Laurent
depontius
Advocate

Joined: 05 May 2004
Posts: 3231
PostPosted: Wed Jul 09, 2014 7:49 pm    Post subject: Reply with quote

Just for a small piece of clarification...

One of those "exciting new features" of systemd is socket activation. I'm not 100% sure what it means, but I think it's somewhere between what inetd/xinetd do and an implementation of "dynamic runtime dependency", so that if daemon B needs daemon A running, B won't try to start until A is actually up. I may be mistaken on this. I know that OpenRC has a parallel-start option for faster booting, but I've never used it: I consider boot time a coffee (or other such time-waster) opportunity. But I can see the desirability of fast booting in some usages.

Is my impression of socket activation wrong, or can someone give me a better description? Does s6 offer the opportunity to "fill this void"? I recognize that it may not really be a void, and that for me it doesn't matter, but sometimes perception is everything.
_________________
.sigs waste space and bandwidth
mackal
Tux's lil' helper

Joined: 04 Aug 2011
Posts: 78

PostPosted: Wed Jul 09, 2014 8:17 pm    Post subject: Reply with quote

depontius wrote:
One of those "exciting new features" of systemd is socket activation.


It is an inetd/xinetd replacement. Personally I like the idea of the socket activation being nicely integrated with systemd and having similar unit files.

xinetd also appears to be pretty dead: the official site is long gone, and the github repo was last active two years ago :/ but I might also need to look closer...
skarnet
n00b

Joined: 01 Jul 2014
Posts: 18
Location: Paris, France

PostPosted: Wed Jul 09, 2014 10:06 pm    Post subject: Reply with quote

One of Lennart's best talents is to take 20-year-old ideas that never caught on in the mainstream world because they weren't properly advertised; to mix them up in a way that forsakes all concept of modularity; and to present the result as the most revolutionary and brilliant thing ever. If he were half as good an engineer as he is a communicator, the world would be a much better place.

"Socket activation" is a messy mixture of three different things:

1. Superservers.
inetd, xinetd, tcpserver, ipsvd, s6-tcpserver, and certainly a lot more that I don't know about. Superservers just listen to a socket and spawn a process for every connection. It's a simple, good model that works well for a lot of services (and not so well for others, but those are far less common than people think). When the superserver is up, the service is up. Clients can connect to the socket and get a server answering them. systemd didn't invent this; inetd did.

2. Programs communicating via open descriptors, not opening their descriptors themselves.
This is a healthy pattern, and what the Unix file descriptor concept was designed for in the first place. A program that reads from stdin and writes to stdout can be used in any environment and adapts to any framework. It can be launched by inetd or any other INET superserver. It can be launched by s6-ipcserver or any other UNIX superserver. It can be run as part of a pipeline.
It is a good idea to write programs that use pre-opened file descriptors whenever possible, to maximize their flexibility. systemd didn't invent this; Unix did.
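Such a descriptor-agnostic program can be as small as a shell filter; the script name here is made up, but the point is that the identical program works under any superserver or in a pipeline:

```shell
# upcase: a "service" that never opens its own connections; it only talks
# on the descriptors it inherits (stdin/stdout), so the very same script
# runs under inetd, under s6-tcpserver/s6-ipcserver, or in a pipeline.
cat > /tmp/upcase <<'EOF'
#!/bin/sh
exec tr '[:lower:]' '[:upper:]'
EOF
chmod +x /tmp/upcase

# In a pipeline:
echo 'hello' | /tmp/upcase          # prints HELLO
# Under a superserver (not run here), stdin/stdout would be the socket:
#   s6-tcpserver 0.0.0.0 8000 /tmp/upcase
```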

3. fd-holding.
Keeping a file descriptor open in another process than the one using it, in order to be able to restart the process if it dies and feed it the same descriptor, is a good pattern when properly used. It is an integral part of supervision suites. The earliest use of the technique (that I know of) is daemontools' svscan, in 1998. systemd didn't invent this; DJB did, or maybe some Unix guru before him.
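The mechanism behind fd-holding is plain Unix fd inheritance, which a shell can demonstrate directly; here fd 3 on a log file stands in for a held socket:

```shell
# fd-holding in miniature: the holder opens the descriptor once; every
# (re)started child inherits the *same* open fd, so nothing attached to
# it is lost across restarts.
: > /tmp/held-log
exec 3>>/tmp/held-log            # the holder opens fd 3 once
for run in 1 2 3; do
  # each "daemon" instance exits after one write (simulating a crash)
  # and is restarted; it never opens the log itself - it inherits fd 3
  sh -c "echo daemon run $run >&3"
done
exec 3>&-                        # the holder closes the descriptor
cat /tmp/held-log                # all three runs wrote through the same fd
```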

"Socket activation" means that systemd behaves as a central superserver that pre-opens all the sockets in the system and holds them, and then feeds them to daemons written to communicate via open descriptors. This doesn't sound like a bad idea, and actually has some value, except that:
- It requires a central registry of all the sockets you need, for all your services. Like a Microsoft Windows registry. It's a database of everything that's running on your system; this defeats the essence of modularity.
- systemd claims this makes the boot time faster: since all the sockets are already created, every service can connect to every other one and start at the same time. But this is not true, or only true to a small extent: if service A depends on service B and they start in parallel, A will only be able to run for as long as it is writing to B (and the buffer is not full); as soon as it needs to read from B, it will hang until B is up and capable of writing to A. Parallel start-up of services via pre-opened sockets only lets the first writes happen in parallel; contrary to what Lennart says, this is not where you gain the most booting time. You gain a little time, for a major decrease in reliability.
- As with everything systemd, this makes systemd itself a major SPOF, and hurts reliability and robustness. Starting services before the logging infrastructure is operational is not a good idea. The logging infrastructure is the first thing you want operational, and you want it failproof, and you do not want to start anything until it is up: if something goes wrong and you have no logs, good luck debugging. And starting A before making sure that B is working is a recipe for failure: what happens if B never starts? Will A block forever on a read? What if it is in the middle of a transaction?

"Socket activation" as systemd does it is not good design. However, there is value in performing fd-holding for services that want it; it just needs to be a separate service, independent from supervision - running on top of process management, not as a part of it, and dynamically configurable without a registry. I have a fd-holding daemon in my projects, but it will be a while before I can get to it.

So no, for now, s6 does not fill this void. But the void is not as big as systemd pretends it is.
- Superservers and flexible daemons already exist.
- s6 performs fd-holding between a service and its logger. Logging infrastructure is the first thing that is up and stable in an s6 installation, and logs are never lost.
- We do not have any fd-holding for network services for now. This will require some thinking and design, in order to provide that functionality without being as intrusive as systemd is.
- The speed aspect is not a concern. You do not accelerate boot times, at least not significantly, by pretending services are up when they are not, starting everything, and letting God sort things out. You accelerate boot times in two ways:
1. By eliminating times where the processor is doing nothing.
It's all about not serializing waiting times. Most init systems achieve this pretty well nowadays. systemd does it well. s6 does it well. Only sysvrc, where you start everything serially, including services that poll the network, still performs badly here.
2. By eliminating times where the processor is doing unnecessary work.
That means reducing the code path between the moment the kernel boots and the moment the services are up and running. Simplifying the code, reducing the number of system calls, always using the lowest-level, closest-possible-to-the-machine tool. That is where systemd loses, badly: most of what systemd saves by aggressive parallelization, it loses in complexity. Try strace -f on a systemd boot, if there even is a way to make it work; then try strace -f on a s6 boot. s6 is extremely frugal in its code path, as the size of the source code indicates.

Let's just be as fast as systemd by doing less work and staying out of the way of applications.
_________________
Laurent
baaann
Guru

Joined: 23 Jan 2006
Posts: 529
Location: uk

PostPosted: Thu Jul 10, 2014 12:23 am    Post subject: Reply with quote

Quote:
"Socket activation" as systemd does it is not good design. However, there is value in performing fd-holding for services that want it; it just needs to be a separate service, independent from supervision - running on top of process management, not as a part of it, and dynamically configurable without a registry. I have a fd-holding daemon in my projects, but it will be a while before I can get to it.

So no, for now, s6 does not fill this void. But the void is not as big as systemd pretends it is.


Martin Gräßlin has stated here

[url]https://plus.google.com/+MartinGräßlin/posts/GMtZrNCeaLD[/url]

that
Quote:
Otherwise I just think it makes sense to use what is available. Please note that I'm not looking for systemd because of random political reason, but because of features not provided by anything else. E.g. socket activation and proper integration of cgroups. Both are things I want to use for Plasma 2 on Wayland. It might be possible to live without it, but in that case it might be that important features are missing (I do hope that we can use socket activation to survive crashes of the session compositor) or that security is not as optimal as it should be. No other system provides such features.


So it looks as if, provided these features are available, KDE/Plasma 2 will continue to be fully usable in the future?

BTW skarnet
Very interesting and well described
depontius
Advocate

Joined: 05 May 2004
Posts: 3231

PostPosted: Thu Jul 10, 2014 1:02 am    Post subject: Reply with quote

After the last response by skarnet I'm a bit more confused about how systemd is selling its socket activation as such a super kewl new feature. I'm more worried that they've made the interface some twisted, non-orthogonal sort of thing that will force unnecessary inclusion of systemd. In essence: though socket activation can be done without systemd, I'm worried that they've designed their interface in such a way that it can't be used without systemd.

On similar ground, it looks as if some sort of "*-logind" will at some point be necessary in order to live without systemd. More stuff seems to be setting that as a requirement, and it sounds as if Weston/Wayland may be headed that way. Maybe the easiest way at some point will be a "dummy-systemd", just to make the surrounding software happy.

Why?

I still believe there is a backlash coming. Pulseaudio had one heck of a backlash - for a while. But nobody had packaged dmix in a way that let the naive user easily solve the problems Pulse solved, so eventually others took over maintenance and made Pulse work well enough.

An alternative needs to be on deck when the systemd backlash happens.
_________________
.sigs waste space and bandwidth
mv
Watchman

Joined: 20 Apr 2005
Posts: 5673

PostPosted: Thu Jul 10, 2014 9:06 am    Post subject: Reply with quote

Some comments.

First of all, I consider s6 a great idea: using an appropriate suite to control processes is the "right" thing to do. PID files have always been a horrible hack and work at best "by chance" (typical example: a process dies, some other process is started and by accident gets the same PID - your PID file is immediately stale, and you do not even notice it).
This is a point where init-systems really needed an improvement, and s6 is able to be such an improvement.
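The PID-file race is easy to demonstrate; the check below only proves that *some* process currently owns the stored PID, not that it is still the daemon (file name hypothetical):

```shell
# Why PID files work only "by chance":
pidfile=/tmp/mydaemon.pid
echo $$ > "$pidfile"                     # pretend our own shell is the daemon
if kill -0 "$(cat "$pidfile")" 2>/dev/null; then
  echo "pid exists"                      # true here - but after the daemon
fi                                       # dies, the kernel may recycle the
                                         # PID for an unrelated process, and
                                         # the same check silently succeeds
rm -f "$pidfile"
```

A supervisor that is the daemon's parent needs no PID file at all: it learns of the death directly and unambiguously via wait().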

On the other hand, it appears to me that the interface of s6 would still need a lot of polishing, most notably:
steveL wrote:
It starts to sound too tangled to me; ideally we'd like s6-* to start them based on a similar command-line to ssd (not exactly the same, but conceptually.)

This is the same reason I wanted to write a "schedule" script frontend for s6; instead of forcing the user to write such custom frontends, it might be better for s6 itself to provide commands for this. Here is an example: assume the "frontend" script is called s6-schedule; it might be called as
Code:
s6-schedule [options] control-directory-name command-line-of-task-to-start

and its only task is to set up the control-directory to run command-line-of-task-to-start (possibly shutting down the previous command if the directory was already set up and "running"). Options might control UID changes etc. (everything which is normally set up in the control-directory). Additional options might allow the task to be "run" immediately (so that no extra command needs to be given); a further option might force it to wait until the task is properly running - the latter not only to avoid the race problem you mentioned, but also to make sure, if e.g. openrc uses this command, that the dependencies are resolved in the correct order (which can mean one must be sure the daemon is properly initialized and running). I understand that all this can be done by starting several s6 commands in sequence, but at least for the most common cases, it should be possible to do all of this with a single command and a few options.
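A sketch of what the core of such a frontend could look like, restricted to the quoting problem (the command name s6-schedule comes from the post; the implementation is purely illustrative):

```shell
# Hypothetical "s6-schedule" frontend: generate the control directory and
# a run script that re-executes the given command line, with every
# argument safely single-quoted (spaces, quotes, backslashes survive).
s6_schedule() {
  dir=$1; shift
  mkdir -p "$dir"
  {
    printf '#!/bin/sh\nexec'
    for arg in "$@"; do
      # wrap each argument in single quotes; an embedded ' becomes '\''
      printf " '%s'" "$(printf '%s' "$arg" | sed "s/'/'\\\\''/g")"
    done
    printf '\n'
  } > "$dir/run"
  chmod +x "$dir/run"
}

# Example: arguments with spaces and an embedded quote survive intact
s6_schedule /tmp/task-X echo "it's a test"
sh /tmp/task-X/run           # prints: it's a test
```

A real frontend would additionally handle UID changes, logging policy, and the optional start/wait behaviour described above; this only shows that the quoting drudgery can be centralized instead of reimplemented per service.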

Only with such a convenient frontend is it reasonable for e.g. openrc to use s6: otherwise, the startup script of every single daemon would need to contain code to set up the s6 control-directories manually, including all the "glue" mentioned before. In the ideal case, the s6 consumer (openrc in this example) never needs to access the control directories directly but can control/read everything in them through frontend commands.

Of course, every consumer (like openrc) might provide its own s6 frontend script, but why not provide it in s6 itself?
skarnet
n00b

Joined: 01 Jul 2014
Posts: 18
Location: Paris, France

PostPosted: Thu Jul 10, 2014 9:52 am    Post subject: Reply with quote

There are several reasons why I'm not too keen on this idea.

- As often, there's more than meets the eye and details are important. In this case, service configuration is more than just a command line: it's a command line to start the service, a command line to cleanup, a command line to log the output of the service (s6 makes your logging flexible), environment variables, etc. Just sending a command line is not enough.
An OpenRC start() script has to work to fetch configuration and build the appropriate command line to feed to start-stop-daemon. Same with a sysvrc script. Still, both frameworks treat the start command line in a special way, because that's what they fork; but I challenge this model. The start command line is an integral part of the service configuration, and should be treated as such. With s6, your service configuration, including various command lines, is gathered in a single directory, and that's it.

- Service configuration is mostly static. How to start a service only depends on the service and the distribution, not on the machine - so this should be part of the service package. What is dynamic is the global configuration environment, such as the IP addresses of network interfaces, and you can find those in files: $dir/run should fetch its dynamic information from the filesystem, but the $dir/run script itself does not need to change or be rebuilt every time you start the service. Dynamically rebuilding static information is unnecessary and makes a system run slower and less reliably.

So I am arguing that s6 has the better service configuration model, and that it is up to OpenRC to adapt to it instead of the other way around. I can be convinced otherwise, but historical ("we've always done it that way and we like it") or political ("that's our policy and we're not changing it") arguments won't do - it will take extremely solid technical arguments.
_________________
Laurent
mv
Watchman

Joined: 20 Apr 2005
Posts: 5673

PostPosted: Thu Jul 10, 2014 11:34 am    Post subject: Reply with quote

skarnet wrote:
it's a command line to start the service, a command line to cleanup, a command line to log the output of the service (s6 makes your logging flexible), environment variables, etc.

It is very good that all this can be changed - optionally! But in 95% of the use cases the same logging is desired, and in 99% of the use cases it is enough to choose one out of two or at most three logging policies. Then it is sufficient to select that policy via some flag, and the frontend sets the corresponding data. The same holds for the cleanup line. For the very rare exceptions where the frontend is not sufficient, one has to use the "backend", of course.
Quote:
With s6, your service configuration, including various command lines, is gathered in a single directory, and that's it.

I completely agree with this. I just say that having a single "frontend" command which generates that directory with appropriate entries for the most common cases (and optionally additionally calls some further s6 commands to "start" this directory appropriately) is more convenient to use than forcing the user to do this initialization manually.
Take a simple example: you need to generate a command line whose arguments are calculated by complicated rules and which might contain special symbols (spaces, but also " ' \ etc.); yes, there are scripts like push.sh which do the correct quoting so that one can generate a corresponding file, but the consumer would have to program all this manually for every single service.
Quote:
Service configuration is mostly static.

Very often, it is not: it is one of the advantages of openrc that if you want to control the details, you do so not by modifying the main command in /etc/init.d but (usually) by modifying only /etc/conf.d, and the latter is arbitrary shell code: in particular, the user of openrc can add dynamic checks wherever they appear appropriate, without modifying the original init-files. In a forcibly static model, all possible checks must be coded in $dir/run itself.
Once more: I do not object to this being optionally possible, but a convenient way of generating $dir dynamically should be provided as well. One does not contradict the other.
Quote:
Dynamically rebuilding static information is unnecessary and makes a system run less reliably and slower.

But easier to configure for the user.
And also for the provider of the service: probably almost every service must ship the same attached files for logging, finishing the command, etc. That is a lot of redundant information to write and keep updated (e.g. if openrc or its user wants to change the default logging policy globally, every single service might need to be modified), and the job of computers is to simplify routine tasks... I understand that the user can use sed and friends to modify the always-same configuration files, but with the next update of openrc, he will have the same mess again...
steveL
Advocate

Joined: 13 Sep 2006
Posts: 4776
Location: The Peanut Gallery

PostPosted: Thu Jul 10, 2014 10:12 pm    Post subject: Reply with quote

skarnet wrote:
(About network services.)

Let's just agree "They are services, to be handled like any other service by the supervision suite / dependency management," which I missed the first time around. My bad.
Quote:
steveL wrote:
I don't agree: afaic the xinetd-replacement needs to tie into other events, and is part of the overall "supervision suite" since it starts and stops less frequently-used services, and must report back if something goes wrong with something it monitors.

I think I see what you mean; we just attach a different meaning to "xinetd-replacement".
I'm not a xinetd user, and am not aware of all its functionality. The model I use is very simple: one process per port, with s6-tcpserver4 (or 6). Enabling or disabling a network service is exactly the same as enabling or disabling any other service, since there's one daemon per network service.

Yeah, but the process isn't forked till it's needed; in the meantime the superserver still has the accepting socket fd, should it need to restart, or it can just pass it off via inheritance and close it in the main process, depending on config.

The former case is more interesting in this context, and ties into what you're talking about with fd-holding.

It's not just about one connection and then reforking for the next client; it's more about delaying startup till it's actually needed, for non-core services.
Quote:
Can you give a concrete example of an event that would require integration of a superserver with a dependency manager?

The network has gone down. Depending on the configuration (ie the runscript depends), this might or might not require something to happen with a service. Or another service used by this one has gone down; the full set of events needs to propagate in both directions, and given that we're really talking here about the addition of event notification and monitoring to openrc, I'd like "xinetd" to be integrated with that (and to be able to run several instances). It also makes a lot of sense to tie in with dhcpcd from where I'm sitting. Again keeping it modular, since it's doing a good job already.
Quote:
The thing is that the supervision tree must always have access to the command line used to start the daemon, because one of the key features is automatic restart. If your daemon dies without you asking for it, s6-supervise will automatically restart it. (After a small delay, to prevent busylooping if the failure remains.) And it obviously needs to run the exact same command line as you gave it in the first place. So, in s6, it reads that command line in a file, the "run script", which is considered part of the service configuration.

Yeah we just want to use /etc/init.d/foo restart instead; which effectively does the same thing, and (hypothetically) runs an s6 command (or some new monitoring command) instead of start-stop-daemon.

Note that start-stop-daemon in openrc is already a custom rc util, not the debian ssd. So we can equally well modify that to call s6 if that makes more sense.
Quote:
steveL wrote:

So can I make sure I understand your posts: s6 does supervision and monitoring of processes, but not dependency-management? Thus openrc could, once it's decided what needs to run and in what order, notify s6 about that, and leave it to supervise/monitor. And it'd then make calls back into openrc, eg when a service goes down, which simply runs commands to start/stop/reinit etc w/e is required, and then disappears like it always has. And those commands are calls back into s6, to tell it what needs to happen, and wait or not as required.

Yes, that is exactly the way I see it and think it should be done.

Cool me too; we just don't need anything more added to openrc than the event monitoring and supervision; to restart etc, s6 would then call back to the extant runscript. You say s6 doesn't do dependency management, but that's exactly what's needed wrt eg a dependency going down as above, so I think we'd need to extend the openrc codebase to deal with what actions to take in those instances (like a specific net interface going down, as notified from dhcpcd, in the same way that it handles wpa-supplicant.)

I don't agree with what you said to mv; you're effectively talking about going back to a much less capable mechanism than we already have when it comes to dependency management and configuration.
mv
Watchman

Joined: 20 Apr 2005
Posts: 5673

PostPosted: Sun Jul 13, 2014 2:25 pm    Post subject: Reply with quote

Just for the record: since there is now a rather simple job scheduler in perl using TCP sockets (see the recent posts in the thread mentioned earlier), I no longer plan to (mis)use s6 as a job scheduler. Nevertheless, I would consider an automatic script to generate the base layout of s6 directories "on the fly" rather convenient. But this has to be decided by the people writing an init system based on s6...
steveL
Advocate

Joined: 13 Sep 2006
Posts: 4776
Location: The Peanut Gallery

PostPosted: Mon Jul 14, 2014 12:15 pm    Post subject: Reply with quote

mv wrote:
Just for the record: since there is now a rather simple job scheduler in perl using TCP sockets (see the recent posts in the thread mentioned earlier), I no longer plan to (mis)use s6 as a job scheduler. Nevertheless, I would consider an automatic script to generate the base layout of s6 directories "on the fly" rather convenient. But this has to be decided by the people writing an init system based on s6...

Hmm well I've read that thread and what I've read there sounds different to what you wrote above:
Quote:
I want to "start" various jobs (for different users in different shells, maybe even in different chroots), but their execution should actually be postponed until some sort of "signal" is sent to these tasks. When they are finished they should report back their exit status.

In the other thread you discuss 'a program which can "schedule tasks", but not in the sense of cron ("at a given time") but in the sense of sequences/conditionals/...' and I have to say that, and the example, sounds a lot like make and an application thereof, perhaps over several makefiles.

I'm not saying it can do all of it, but it can call out to whatever you like based on a sequence of dependencies, and parallelise whatever it can. So I'd build the whole thing around make.

WRT chrooting, I've done similar scripting Gentoo installs; you need a script outside and inside the chroot, but you can make that generic easily enough. I've done it in bash, and you have perl as script-language of choice which allows even more capability (like types and reflection.)

Don't forget that with make you effectively have as many layers of evaluation as you like; you already have a layer on top of sh, which can further use awk or anything you want, and call back into (or generate) make. gmake internalises that, though it's a terrible way to write lisp, based on experience. pattern-deps are nice though. ##workingset ;)
Page 2 of 4

 