Gentoo Forums
systemd bad for dev and gentoo?
skarnet
n00b

Joined: 01 Jul 2014
Posts: 25
Location: Paris, France

Posted: Mon Jul 14, 2014 6:24 pm

steveL wrote:
Yeah but the process isn't forked til it's needed; in the meantime the superserver still has the accepting socket fd, should it need to restart or can just pass it off via inheritance and close it in the main, depending on config.

The former case is more interesting in this context, and ties into what you're talking about with fd-holding.

It's not just about one connection and then reforking for the next client; it's more about delaying startup til it's actually needed, for non-core services.

But forking the process or not does not change a single thing. If anything, delaying the startup is less efficient, because you have to initialize the super-server in a lazy way at the first connection, which can add a bit of latency. There is no good reason to do that.

Reducing the number of processes on a system is a very misguided goal. Processes are not a scarce resource, and sleeping processes do not hurt the system at all; the simplicity of a multiprocess design generally outweighs the cost of the resources used by those processes - a cost that is often overestimated because it's hard to evaluate precisely. A better goal than reducing the number of sleeping long-lived processes is to reduce the amount of private dirty memory used by those processes while they're sleeping, because that is where the real cost is.

And a superserver can be very light. s6-tcpserver uses 16k of private dirty (i.e. non-sharable) data, and no CPU at all while waiting for connections. That means I pay 16k (24k if you count kernel memory) for every TCP port I want to listen to, and that's it - no need to bother with all the complicated stuff that systemd does. I'm willing to bet that the logic used by systemd to get the configuration for when and how to start a superserver on what socket uses up more resources than a few s6-tcpserver processes, and that it lengthens the total code path. To begin with, just the configuration file parser could take more than that. What really counts is the code path between "kernel boots" and "machine is serving", and doing complex work makes it longer than necessary.
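To make the model concrete, here is what such a per-port process can look like as a run script (a sketch only; the port number and the served program are chosen arbitrarily - cat simply echoes the connection back, UCSPI-style):
Code:
~ # cat svdir/echo/run
#!/bin/sh
# one s6-tcpserver4 process per listening port; it spawns "cat" for each
# connection, with the client socket on stdin/stdout
exec s6-tcpserver4 0.0.0.0 7007 cat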

It makes even less sense to use the socket activation approach on non-core services. Grabbing the socket early means that you can be accepting connections faster, but what good is it if you can't serve just yet because you still need to fork and initialize your superserver, and the core services still aren't totally up and you need to wait for them before you can do anything meaningful ? Accepting connections faster just means you'll be using up resources that would be better employed finishing the initialization of your core services first.

The right approach is to initialize the core services, among which a fd-holding daemon, then the non-core services, among which the network superservers. The superservers grab a socket then copy it into the fd-holding daemon. This yields the smallest amount of source code, the shortest code path before the core services are running, and the shortest code path before the non-core services can serve a full request. This does not yield the shortest code path before the non-core services can accept a connection, but that metric is meaningless.

steveL wrote:
The network has gone down. Depending on the configuration (ie the runscript depends), this might or might not require something to happen with a service. Or another service used by this one has gone down; the full set of events need to propagate in both directions, and given that we're really talking here about the addition of event notification and monitoring to openrc, I'd like "xinetd" to be integrated with that (and to be able to run several instances). It also makes a lot of sense to tie in with dhcpcd from where I'm sitting. Again keeping it modular, since it's doing a good job already.

Yes, this is all about using the event-consuming and event-producing APIs. No need to "integrate" the superservers, or network servers: there is just a dependency tree, with the network interface at the bottom and the highest-level services at the top, and once your dependency management system has a way to get notified of any event and to create events, then it's a standard use case.

Note that it's generally a bad policy to kill A if A depends on B and B has gone down. Generally, A will notice that B has disappeared, and will either die on its own (and be restarted by the supervisor when B is back up), or enter a degraded state until B is running again. Service dependencies should only be used to start up or shut down a whole service tree in the correct order; when you're in the state where everything is running, let the supervision suite handle random deaths, that's what it's for. It will auto-restart services that die, and dependent services will automatically go down and back up as necessary. Having the dependency management system mess with process states outside of administrator action is a recipe for spectacular failures; do not weaken reliability by adding control where it is not needed.

steveL wrote:
I don't agree with what you said to mv; you're effectively talking about going back to a much less capable mechanism than we already have when it comes to dependency management and configuration.

Because I'm much less focused on features than on correctness of the design, reliability of the init procedure, and general efficiency - and I've found that the best way to design service configuration was to have a static "./run" script among other things as part of the configuration, ideally provided in the same package as the daemon, and dynamic information provided via other ways - files sourced from the "./run" script, for instance.

I'm not opposed to providing a command-line frontend that would create a whole service directory, including the ./run file, when given a few options such as the logging directory and the daemon command line. However, using that frontend, and thus creating service directories, dynamically as part of a normal OpenRC startup routine, would significantly lessen the reliability and security of the system (you're now executing dynamically generated code, instead of instanced templates, as root) and increase the complexity of service startup - i.e. make it less maintainable, and slower. If it's the only way for supervision suites to make it into a mainstream distribution, then so be it, but I would consider it a first step, with another step (make OpenRC use more static service directories instead of generating them on-the-fly) to be performed later.
_________________
Laurent
mv
Watchman

Joined: 20 Apr 2005
Posts: 6747

Posted: Mon Jul 14, 2014 6:59 pm

Maybe, we should discuss this in the other thread.
steveL wrote:
and I have to say that, and the example, sounds a lot like make and an application thereof, perhaps over several makefiles.

Not really, since "make" has to know in advance which files are generated and uses these as "magic flags" for the dependencies - you would need a lot of "glue" to generate such files artificially, whereas the "script" that finally has to be written should be very simple. Moreover, once "make" is started, you cannot edit it "on the fly" without stopping the jobs you started (unless you were very careful concerning the generation of the "magic flag" files). It could partially be used for such a purpose, but it would be a mess.

Anyway, the problem is solved now with the "schedule" script. Here is an example of how I typically use it:

1. I start in a chroot "schedule start emerge -NDu @world" (this starts immediately so that I do not lose any time when doing the next steps...)
2. I start on my main partition "schedule queue emerge -NDu @world" (my system's memory is low, so normally I avoid running both simultaneously).
3. Similarly, I queue a script to compile and install the kernel (which only makes sense after the emerge -NDu @world).
4. The same as 3. for the chroot.
5. In the chroot, I need special kernel drivers: So I run there "schedule queue emerge -1O @module-rebuild".
6. In another window, I run "schedule queue shutdown -h now"

Then the actual "scheduling" is a simple, ad-hoc command line: it is crucial for me that I do not need cumbersome "glue" here.
Probably "schedule exec" alone would be enough in this example, but this way, just to be sure, some tasks may be executed even if some others failed.
Code:
( schedule exec 2 3; schedule exec 1 4 5; schedule exec )

Now the important point is that if I suddenly realize, after I sent the above command, that I also want to do some lengthy mathematical calculation before shutdown, I can just queue it at position 6. Then I just press Ctrl-C in the window where I executed the above script and rerun it: this will stop none of the jobs already executing, but just insert this new job before the machine shuts down...
steveL
Watchman

Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

Posted: Wed Jul 16, 2014 1:04 pm

skarnet wrote:
steveL wrote:
Yeah but the process isn't forked til it's needed; in the meantime the superserver still has the accepting socket fd, should it need to restart or can just pass it off via inheritance and close it in the main, depending on config.

The former case is more interesting in this context, and ties into what you're talking about with fd-holding.

It's not just about one connection and then reforking for the next client; it's more about delaying startup til it's actually needed, for non-core services.

But forking the process or not does not change a single thing. If anything, delaying the startup is less efficient, because you have to initialize the super-server in a lazy way at the first connection, which can add a bit of latency. There is no good reason to do that.

Huh? No, it's the superserver which is forking the server process, so the superserver is already started. Yes it may add a small delay from initial accept to response, but that's an admin decision, not ours. Typically they're not used for services over the WAN, so it's not an issue, but again, it's not our problem to make that call. We provide mechanism, and configuration, not decisions, apart possibly from defaults, which are up to QA (aka bug-wranglers) to change. Again, not our decision.

This is in the context of:
skarnet wrote:
I'm not a xinetd user, and am not aware of all its functionality. The model I use is very simple: one process per port, with s6-tcpserver4 (or 6). Enabling or disabling a network service is exactly the same as enabling or disabling any other service, since there's one daemon per network service.
My point is: that's not a super-server. If we're providing an xinetd submodule, then it should be capable of doing what xinetd does.

skarnet wrote:
Reducing the number of processes on a system is a very misguided goal.
That's true; it's not a goal of this at all though.

In fact we rely on a cheap multi-processing environment, which is a hallmark of Unix-inspired OSes; Linux exemplifies this to the extent that threading was implemented using the existing process model, simply labelling the result a "scheduling entity" instead.
Quote:
And a superserver can be very light. .. What really counts is the code path between "kernel boots" and "machine is serving", and doing complex work makes it longer than necessary.

I'm not sure why you discuss a "superserver" for core services (in contrast with non-core below) since it's already been pointed out that this is for non-core services. What you appear to be discussing is the monitoring component; or that's still to be discussed.
Quote:
It makes even less sense to use the socket activation approach on non-core services. Grabbing the socket early means that you can be accepting connections faster, but what good is it if you can't serve just yet because you still need to fork and initialize your superserver, and the core services still aren't totally up and you need to wait for them before you can do anything meaningful ?

Again, the superserver has to be up by definition; it's what forks and execs the non-core service. Core services are usually up beforehand, though again being able to use more than one instance (which openrc gives us with a simple symlink) means the admin can set up dependencies how they like.
Quote:
Accepting connections faster just means you'll be using up resources that would be better employed finishing the initialization of your core services first.

Another goal that is irrelevant to the use-case.
Quote:
The right approach is to initialize the core services, among which a fd-holding daemon, then the non-core services, among which the network superservers. The superservers grab a socket then copy it into the fd-holding daemon. This yields the smallest amount of source code, the shortest code path before the core services are running, and the shortest code path before the non-core services can serve a full request. This does not yield the shortest code path before the non-core services can accept a connection, but that metric is meaningless.

So is the other one for non-core services. The "fd-holding daemon" would be the xinetd submodule. What you're calling superservers aren't, afaic; though there does need to be a control/monitoring process (if not a pair in an event-based setup.)
Quote:
steveL wrote:
The network has gone down. Depending on the configuration (ie the runscript depends), this might or might not require something to happen with a service. Or another service used by this one has gone down; the full set of events need to propagate in both directions, and given that we're really talking here about the addition of event notification and monitoring to openrc, I'd like "xinetd" to be integrated with that (and to be able to run several instances). It also makes a lot of sense to tie in with dhcpcd from where I'm sitting. Again keeping it modular, since it's doing a good job already.

Yes, this is all about using the event-consuming and event-producing APIs. No need to "integrate" the superservers, or network servers: there is just a dependency tree, with the network interface at the bottom and the highest-level services at the top, and once your dependency management system has a way to get notified of any event and to create events, then it's a standard use case.

Yes, though we haven't heard anything about events in s6 so far.

Quote:
Note that it's generally a bad policy to kill A if A depends on B and B has gone down. Generally, A will notice that B has disappeared, and will either die on its own (and be restarted by the supervisor when B is back up), or enter a degraded state until B is running again. Service dependencies should only be used to start up or shut down a whole service tree in the correct order; when you're in the state where everything is running, let the supervision suite handle random deaths, that's what it's for. It will auto-restart services that die, and dependent services will automatically go down and back up as necessary. Having the dependency management system mess with process states outside of administrator action is a recipe for spectacular failures; do not weaken reliability by adding control where it is not needed.

It's just mechanism, not policy. If a service dies and we know about it (since we're monitoring it and events related to it), then there are cases where the admin wants dependants to be restarted when it comes back up (because it matters, which is why they've told us to monitor it.) I don't care what they are particularly, since that's down to the configuration and the runscript, each of which are as simple as they can be, and as flexible as they need to be.
Quote:
steveL wrote:
I don't agree with what you said to mv; you're effectively talking about going back to a much less capable mechanism than we already have when it comes to dependency management and configuration.

Because I'm much less focused on features than on correctness of the design, reliability of the init procedure, and general efficiency - and I've found that the best way to design service configuration was to have a static "./run" script among other things as part of the configuration, ideally provided in the same package as the daemon, and dynamic information provided via other ways - files sourced from the "./run" script, for instance.

Yes that's fine, but you've gone down a path which is already better handled in openrc, so it doesn't offer us anything.

Openrc started from the same premises of correctness, reliability and efficiency. It just happens to provide a superior interface for setup and configuration. The beauty of runscripts is that they can be, and often are, purely declarative in the way you talk about. Yet they can equally be as dynamic as the service or admin requires.

Obviously it doesn't handle supervision, which is why we keep talking about integrating something like runit, s6 or monit, as well as general events.
Quote:
I'm not opposed to providing a command-line frontend that would create a whole service directory, including the ./run file, when given a few options such as the logging directory and the daemon command line. However, using that frontend, and thus creating service directories, dynamically as part of a normal OpenRC startup routine, would significantly lessen the reliability and security of the system (you're now executing dynamically generated code, instead of instanced templates, as root) and increase the complexity of service startup - i.e. make it less maintainable, and slower.

That's not an inherent limitation: openrc already proves the opposite.

There's nothing inherently wrong with a directory per-service; daemontools /srv, and indeed (/var)/run use the same model.
Quote:
If it's the only way for supervision suites to make it into a mainstream distribution, then so be it, but I would consider it a first step, with another step (make OpenRC use more static service directories instead of generating them on-the-fly) to be performed later.

Openrc doesn't generate service directories; that's only in play here because of your conceptual framework.

From what I've heard so far, it'd be better for us just to implement the monitors (which are essentially trivial) and event daemon ourselves. There's no way we're jumping through all the hoops you keep graciously presenting us with, imo; it'd be a net loss for us, and would only distract from the areas that do need work. Though we may end up using parts of your codebase, if that's ok ;)

Though I'm not a developer of anything; it's just my opinion. Though I will be patching openrc in a branch somewhere soon.

Someone else may be interested in integration of openrc with s6 along the lines you discuss. I could see that as an optionally-enabled part of it, but again I don't see it as valuable, along the above lines. It feels more like over-complexification, adding layer on top of layer instead of thinking the design through. Still, there's a long history of that ;-)
Dr.Willy
Guru

Joined: 15 Jul 2007
Posts: 547
Location: NRW, Germany

Posted: Fri Aug 29, 2014 4:46 pm

Oh boy, I seem to have missed out on an interesting discussion here :)
So forgive my attempt to revive it, because I don't think all misunderstandings had been resolved before it came to a conclusion.
First of all let me check that I (and all of you) understood the concepts of openrc and s6 correctly:

Starting a Service
OpenRC: For every service Foo there is a runscript init.d/Foo that contains a function called start() which launches the service's main executable.
s6: For every service Foo there is a directory svdir/Foo/ that contains a script called run which launches the service's main executable.
The main difference is that the start() function returns, leaving the process to its own devices.
The svdir/Foo/run script on the other hand does not return while the service is up. The run script effectively _is_ the service.
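To illustrate the difference with a minimal sketch (the daemon "food" and its flags are invented; only the shape of the two scripts matters):
Code:
~ # cat /etc/init.d/foo
#!/sbin/runscript
# OpenRC: start() backgrounds the daemon and then returns
start() {
        ebegin "Starting foo"
        start-stop-daemon --start --exec /usr/sbin/food -- --daemonize
        eend $?
}

~ # cat svdir/foo/run
#!/bin/sh
# s6: exec the same daemon in the foreground; this process *is* the service
exec /usr/sbin/food --foreground 2>&1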

Stopping a Service
OpenRC: For every service Foo there is a runscript init.d/Foo that contains a function called stop() which terminates the service's main process.
s6: Since s6 has direct access to the service's main process, s6 can signal it to shut down without requiring pidfiles or anything of the like.

Configuration
OpenRC: For every init.d/Foo there is a script conf.d/Foo which sets up the configuration ("env") used in the init.d script.
s6: Since svdir/Foo/run is a script it can read arbitrary configuration files.
With regard to mv's comments, there is nothing that prevents a svdir/Foo/config file from taking the place of conf.d/Foo
Or, as djb suggests, use a config directory: http://cr.yp.to/daemontools/envdir.html | http://skarnet.org/software/s6/s6-envdir.html
All the talk about static/dynamic configuration and dynamic generation seems to be a huge misunderstanding on both sides of the fence.
Dynamically generating a service directory is equivalent to dynamically generating an init.d/Foo runscript. Nobody does or wants that.

Conclusion
OpenRC does not supervise services.
OpenRC has to resort to pidfiles to figure out which process actually belongs to which service.
s6 does not provide a dependency mechanism.

So here are my questions for you skarnet:
s6 replacing openrc:
- Do you think services require dependency information and if so, where/how should it be specified?
- Assuming configuration of a service is done via s6-envdir, how would you handle updates?
i.e. Foo-1.1 came with a default envdir, which the user modified. The user now updates from Foo-1.1 to Foo-2.0. What does the user have to do to migrate his configuration?
- Is it possible to run multiple instances of the same service i.e. two OpenVPN instances with different configurations?
e.g. if init.d/openvpn.foo is a symlink to init.d/openvpn it uses /etc/openvpn/foo.conf while if you symlink init.d/openvpn.bar to init.d/openvpn it uses /etc/openvpn/bar.conf

openrc on top of s6:
- Does the ./run script use the environment from which it is called?
i.e. init.d/Foo:start() will use the environment set by conf.d/Foo.
If start() calls "s6-svc -u Foo", will the env still be in place for Foo/run?
skarnet
n00b

Joined: 01 Jul 2014
Posts: 25
Location: Paris, France

Posted: Fri Aug 29, 2014 6:45 pm

Dr.Willy wrote:

OpenRC does not supervise services.
OpenRC has to resort to pidfiles to figure out which process actually belongs to which service.

I thought as much, but apparently not everyone agreed. :wink:

Dr.Willy wrote:

s6 does not provide a dependency mechanism.

Correct. Not at the moment, anyway.

Dr.Willy wrote:

- Do you think services require dependency information and if so, where/how should it be specified?

Frankly, I don't know whether it's desirable. The supervision framework makes sure that it is not required: starting everything at the same time will eventually work, since services that do not meet their dependencies fail, die, and get restarted a bit later, so the service DAG will naturally build itself over time. However, this is arguably ugly, because restarting services until they succeed is akin to polling, and polling is bad. This does not matter for small services, but it may be noticeable for heavyweight ones that take non-negligible amounts of resources to load, and that will unnecessarily slow down the booting process if they repeatedly attempt to start and fail.

So, conceptually, I think that service dependency management for boot and shutdown time is a good thing; in practice it can reduce the boot time execution code path by a certain amount, and it can avoid unnecessary failures. The tradeoff is that it adds complexity to the boot management process: a little at run time (to actually interpret the DAG in its chosen format), and a lot at administration time - it's easy to add a service to the pool and let the supervisor do its thing, it's a lot more hassle to track down dependencies and write them in a format that's easy for the boot process to parse, especially since dependencies may change across releases. Also, it's a lot more brittle - get one dependency wrong and your whole boot process might be screwed.

So I'm really not sure. Not doing dependency management sounds mediocre and careless, and doing it sounds heavyweight and adds load onto package maintainers; neither option is particularly attractive. It's not a surprise that systemd chose to beg the question: "No dependencies! We open all your sockets for you anyway, so start everything at the same time and it will just work". I can definitely understand the appeal, it sounds like the magical solution, but unfortunately it doesn't work (for reasons I outlined in an earlier post), it makes the system more fragile, and it requires a central registry, which we do not want.

Usually, when I encounter hard questions like these, I put them aside and work on other things. And when I come back to the hard question, circumstances have changed, or I have acquired more experience, or more tools, and the solution is easier to find.

Dr.Willy wrote:

- Assuming configuration of a service is done via s6-envdir, how would you handle updates?
i.e. Foo-1.1 came with a default envdir, which the user modified. The user now updates from Foo-1.1 to Foo-2.0. What does the user have to do to migrate his configuration?

It depends on the distribution policy for storing and upgrading user configurations; there's nothing specific to s6 here. If your real question is "how do you atomically update a directory ?", then there's a way to do that with symbolic links. The exact implementation is left as an exercise to the reader; I'll give the solution later :twisted:
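One common way to get the atomic swap with symbolic links (not necessarily the solution skarnet has in mind; the paths are invented, and mv -T is GNU coreutils) is to keep the envdir behind a symlink and rename a fresh symlink over it - rename() is atomic, so a reader always sees either the old or the new configuration:
Code:
# /etc/foo/env is a symlink to the current envdir, e.g. env-1.1
cp -a /etc/foo/env-1.1 /etc/foo/env-2.0   # build and edit the new envdir
ln -s env-2.0 /etc/foo/env.new            # make a symlink under a temporary name
mv -T /etc/foo/env.new /etc/foo/env       # plain rename(2) over the old symlink: atomic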

Dr.Willy wrote:

- Is it possible to run multiple instances of the same service i.e. two OpenVPN instances with different configurations?
e.g. if init.d/openvpn.foo is a symlink to init.d/openvpn it uses /etc/openvpn/foo.conf while if you symlink init.d/openvpn.bar to init.d/openvpn it uses /etc/openvpn/bar.conf

Not with the same service directory, but nothing prevents you from duplicating the service directory and updating it with your second configuration. s6 only sees service instances, it does not know or care whether those are instances of the same binary or the same "conceptual" service. If you have 12 service types, each having 3 instances with a different config, then you would make 36 service directories. It's okay. Service directories are small and processes are cheap.

Dr.Willy wrote:

- Does the ./run script use the environment from which it is called?
i.e. init.d/Foo:start() will use the environment set by conf.d/Foo.
If start() calls "s6-svc -u Foo", will the env still be in place for Foo/run?

No, it does not, and that's one of the key points of supervision: services are always run with the same environment no matter whether they are launched by boot scripts or manually. This is extremely important for reliability and reproducibility.
Services inherit their environment from the supervision tree, and can define their own environment variables, for instance via s6-envdir, in the run script, before executing into the long-lived process. They never inherit the environment from the caller of s6-svc. Which is why the run script, or more generally the service directory, must be the place where configuration information (or pointers to configuration information) is stored; the boot scripts cannot be used for that.
_________________
Laurent
mv
Watchman

Joined: 20 Apr 2005
Posts: 6747

Posted: Fri Aug 29, 2014 7:06 pm

Dr.Willy wrote:
All the talk about static/dynamic configuration and dynamic generation seems to be a huge misunderstanding on both sides of the fence.
Dynamically generating a service directory is equivalent to dynamically generating an init.d/Foo runscript. Nobody does or wants that.

I do not think that there is a misunderstanding. I simply do not agree with "Nobody does or wants that":
It is not accidental that systemd has a lot of "generator" services which do nothing else but generate .service units into /run for later usage. There are many purposes for this, not only the somewhat exotic "misuse" of s6 as a scheduler. Applications range from reacting to newly connected hardware or new net devices, through dynamically adding new software (e.g. I have the "squashmount" script in mind here, which might have to be started on dynamically added new directories), to setting up VMs/chroots in dynamically chosen new locations.
Dr.Willy
Guru

Joined: 15 Jul 2007
Posts: 547
Location: NRW, Germany

Posted: Sat Aug 30, 2014 12:58 am

skarnet wrote:
Dr.Willy wrote:
- Do you think services require dependency information and if so, where/how should it be specified?

[…]
So I'm really not sure. Not doing dependency management sounds mediocre and careless, and doing it sounds heavyweight and adds load onto package maintainers; neither option is particularly attractive. It's not a surprise that systemd chose to beg the question: "No dependencies! We open all your sockets for you anyway, so start everything at the same time and it will just work". I can definitely understand the appeal, it sounds like the magical solution, but unfortunately it doesn't work (for reasons I outlined in an earlier post), it makes the system more fragile, and it requires a central registry, which we do not want.

Usually, when I encounter hard questions like these, I put them aside and work on other things. And when I come back to the hard question, circumstances have changed, or I have acquired more experience, or more tools, and the solution is easier to find.

Well, OpenRC uses explicit dependencies, systemd uses an optimistic approach. Those solutions might be good or bad, but at the end of the day they are solutions.

skarnet wrote:
Dr.Willy wrote:
- Assuming configuration of a service is done via s6-envdir, how would you handle updates?
i.e. Foo-1.1 came with a default envdir, which the user modified. The user now updates from Foo-1.1 to Foo-2.0. What does the user have to do to migrate his configuration?

It depends on the distribution policy for storing and upgrading user configurations; there's nothing specific to s6 here. If your real question is "how do you atomically update a directory ?", then there's a way to do that with symbolic links. The exact implementation is left as an exercise to the reader; I'll give the solution later :twisted:

I see how this is supposed to work, I'm actually thinking about "envdirs" in general.
Since you can run diff on directories it won't be such a problem after all.
One problem I still see though is that config files can have comments which explain what a certain variable does, which is quite convenient. Envdir entries would have to store that kind of documentation elsewhere.

skarnet wrote:
Dr.Willy wrote:
- Is it possible to run multiple instances of the same service i.e. two OpenVPN instances with different configurations?
e.g. if init.d/openvpn.foo is a symlink to init.d/openvpn it uses /etc/openvpn/foo.conf while if you symlink init.d/openvpn.bar to init.d/openvpn it uses /etc/openvpn/bar.conf

Not with the same service directory, but nothing prevents you from duplicating the service directory and updating it with your second configuration. s6 only sees service instances, it does not know or care whether those are instances of the same binary or the same "conceptual" service. If you have 12 service types, each having 3 instances with a different config, then you would make 36 service directories. It's okay. Service directories are small and processes are cheap.

That's not the point. The point is that I can have multiple instances of the same service running on slightly different configs.
This is kind of like the argv[0] parameter in C. If you have multiple symlinks to the same executable, the program is the same, but you can have it do different things, depending on the argv[0] parameter.
In short: is the ./run script aware of its own (service-)name?

skarnet wrote:
Dr.Willy wrote:
- Does the ./run script use the environment from which it is called?
i.e. init.d/Foo:start() will use the environment set by conf.d/Foo.
If start() calls "s6-svc -u Foo", will the env still be in place for Foo/run?

No, it does not, and that's one of the key points of supervision: services are always run with the same environment no matter whether they are launched by boot scripts or manually. This is extremely important for reliability and reproducibility.
Services inherit their environment from the supervision tree, and can define their own environment variables, for instance via s6-envdir, in the run script, before executing into the long-lived process. They never inherit the environment from the caller of s6-svc. Which is why the run script, or more generally the service directory, must be the place where configuration information (or pointers to configuration information) is stored; the boot scripts cannot be used for that.

That makes sense.
Then however, I think the only feasible thing you can do with openrc AND s6 is having openrc do basically everything and use s6 only for "custom services", like gpg-agent and the like. (as you describe here)
Everything else would require s6 to actually replace openrc.
Speaking of which …
s6 can already be installed from gentoo's portage. What is still missing are the svdirs and … well - s6's ability to actually function like a _complete_ init system.

mv wrote:
Dr.Willy wrote:
All the talk about static/dynamic configuration and dynamic generation seems to be a huge misunderstanding on both sides of the fence.
Dynamically generating a service directory is equivalent to dynamically generating an init.d/Foo runscript. Nobody does or wants that.

I do not think that there is a misunderstanding. I simply do not agree with "Nobody does or wants that":
It is not accidental that systemd has a lot of "generator" services which do nothing else but generate .service units into /run for later usage. There are many purposes for this, not only the somewhat exotic "misuse" of s6 as a scheduler. Applications range from reacting to newly connected hardware or new net devices, through dynamically adding new software (e.g. I have the "squashmount" script in mind here, which might have to be started on dynamically added new directories), to setting up VMs/chroots in dynamically chosen new locations.

I'm still not seeing it. Could you provide an actual example of why and how this is or would be done in openrc?
mv
Watchman

Joined: 20 Apr 2005
Posts: 6747

Posted: Sat Aug 30, 2014 5:26 am

Dr.Willy wrote:
Could you provide an actual example of why and how this is or would be done in openrc?

openrc is not supervising services, so there is not much point in doing it. The point is that with s6 you could do more than with openrc and react to dynamic events like those from udev.
Partly, the "net" service of openrc (or the more primitive squash_dir predecessor, or squashmount) goes in this direction, since you "just" have to symlink the corresponding init.d files to enable them for another interface/partition, but this still requires writing into /etc and thus is somewhat static (e.g. you would not want to do this automatically in a udev rule which reacts to a USB wifi stick, since this would mean that it remains in place over the next boot) - all this "net" handling is a hack in the openrc concept (which is probably the reason why it was replaced by newnet, which, however, is only a solution for simple settings and leaves it to users who need e.g. pppoe to implement the oldnet manually). The reason is that openrc is exactly lacking the possibility of dynamic setups.
In systemd, such "possibly generator-aware" stubs like net are the *@.service units (this is why netctl, which is somehow a "copy" of oldnet in systemd, uses them); this is probably also what you have in mind with your question about argv[0]. But this does not save you from having corresponding "generators", still: it would be better if these configs did not necessarily need to be on disk permanently.
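For readers who have not used it, the oldnet scheme referred to above works by symlinking init scripts by hand (interface names are just examples) - which is exactly the static, written-into-/etc step being criticised here:
Code:
~ # cd /etc/init.d
~ # ln -s net.lo net.eth0            # create a service instance for eth0
~ # ln -s net.lo net.wlan0           # and one for the wifi stick
~ # rc-update add net.eth0 default   # recorded under /etc, survives reboots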
skarnet
n00b

Joined: 01 Jul 2014
Posts: 25
Location: Paris, France

Posted: Sat Aug 30, 2014 5:49 am

Dr.Willy wrote:

One problem I still see though is that config files can have comments which explain what a certain variable does, which is quite convenient. Envdir entries would have to store that kind of documentation elsewhere.

Oh, that's the kind of thing you're thinking about. Well I'm using envdirs because they're quite easy to generate automatically and to parse; it makes code very simple to write - but it's not vital that configuration be stored in envdirs. If it's more practical for distributions to have a file setting environment variables using shell syntax, which allows for comments, then it's not a problem - the run script can be a shell script that sources a configuration file. Or a script in a language of their choice reading configuration in a format of their choice, really - you can write your run script in Javascript or Malbolge if you really want to.
Even better, packages could have a "configuration file compiler" that would parse a file in their chosen format and output an envdir. There are multiple solutions there.
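As a rough sketch of such a "configuration file compiler" (the script name, paths and input format are invented, and it ignores quoting and shell expansion): a few lines of sh can turn a commented KEY=value file into an envdir with one file per variable:
Code:
#!/bin/sh
# usage: conf2envdir /etc/foo/foo.conf /etc/foo/env
conf=$1 envdir=$2
mkdir -p "$envdir"
# keep only KEY=value lines, skipping comments and blank lines
grep -E '^[A-Za-z_][A-Za-z0-9_]*=' "$conf" |
while IFS='=' read -r key value; do
        printf '%s\n' "$value" > "$envdir/$key"
done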

Dr.Willy wrote:
The point is that I can have multiple instances of the same service running on slightly different configs.
This is kind of like the argv[0] parameter in C. If you have multiple symlinks to the same executable, the program is the same, but you can have it do different things, depending on the argv[0] parameter.
In short: is the ./run script aware of its own (service-)name?

The ./run script is run with its current directory set to the service directory it is in. So, by using relative names, you can use the same run script with different configurations. You still have to duplicate the service directory though.
_________________
Laurent
Dr.Willy
Guru

Joined: 15 Jul 2007
Posts: 547
Location: NRW, Germany

Posted: Sat Aug 30, 2014 11:46 am

mv wrote:
Dr.Willy wrote:
Could you provide an actual example of why and how this is or would be done in openrc?

openrc is not supervising services, so there is not much point in doing it. The point is that with s6 you could do more than with openrc and react to dynamic events like those from udev.
Partly, the "net" service of openrc (or the more primitive squash_dir predecessor, or squashmount) goes in this direction, since you "just" have to symlink the corresponding init.d files to enable them for another interface/partition, but this still requires writing into /etc and thus is somewhat static (e.g. you would not want to do this automatically in a udev rule which reacts to a USB wifi stick, since this would mean that it remains in place over the next boot) - all this "net" handling is a hack in the openrc concept (which is probably the reason why it was replaced by newnet, which, however, is only a solution for simple settings and leaves it to users who need e.g. pppoe to implement the oldnet manually). The reason is that openrc is exactly lacking the possibility of dynamic setups.
In systemd, such "possibly generator-aware" stubs like net are the *@.service units (this is why netctl, which is somehow a "copy" of oldnet in systemd, uses them); this is probably also what you have in mind with your question about argv[0]. But this does not save you from having corresponding "generators", still: it would be better if these configs did not necessarily need to be on disk permanently.

mv,

you started talking about "service generators" at the beginning of page two and now, halfway through page three I still have no fucking clue what that is supposed to be and what purpose that is supposed to have.
Don't be TomWij; make a point.

Are the Foo@.service units from systemd what you are talking about?

skarnet wrote:
Even better, packages could have a "configuration file compiler" that would parse a file in their chosen format and output an envdir. There are multiple solutions there.

That compiler would have to be able to detect changes in the source file or it would have to be run every time the service is started.
If you keep only the resulting envdir, you return to the question of documentation.
If you keep the source config, you have to make sure the envdir is always up to date.
Keeping the envdir up to date requires you to remember the state of the source config; if you don't want to remember that, you have to run the compiler every time, which nullifies any advantage the envdir might have.

skarnet wrote:
Dr.Willy wrote:
The point is that I can have multiple instances of the same service running on slightly different configs.
This is kind of like the argv[0] parameter in C. If you have multiple symlinks to the same executable, the program is the same, but you can have it do different things, depending on the argv[0] parameter.
In short: is the ./run script aware of its own (service-)name?

The ./run script is run with its current directory set to the service directory it is in. So, by using relative names, you can use the same run script with different configurations. You still have to duplicate the service directory though.

This does not answer my question.

How do relative names help me, when what I want to do is this:
Code:
~ # cat openvpn@university/run
CONFNAME=${SVNAME#*@}
exec /usr/bin/openvpn --config /etc/openvpn/${CONFNAME}.conf

"openvpn@university" is a symlink to the actual "openvpn@" service directory
SVNAME is "openvpn@university"
skarnet
n00b

Joined: 01 Jul 2014
Posts: 25
Location: Paris, France

Posted: Sat Aug 30, 2014 12:25 pm

Dr.Willy wrote:
How do relative names help me, when what I want to do is this:
Code:
~ # cat openvpn@university/run
CONFNAME=${SVNAME#*@}
exec /usr/bin/openvpn --config /etc/openvpn/${CONFNAME}.conf

"openvpn@university" is a symlink to the actual "openvpn@" service directory
SVNAME is "openvpn@university"


Your SVNAME variable will not be defined in a run script, since run scripts inherit their environment from the supervision tree only. So you have to read the value of SVNAME or CONFNAME somewhere, in the filesystem. Why not in the service directory ? That's what it's there for.
Hardcode "CONFNAME=university" in the openvpn@university/conf file, and your run script would source ./conf .

As I said, you will not be able to use the same service directory for two openvpn instances with different configurations. You need two service directories. You can reuse the run script if you want, but you have to put the configuration somewhere.
_________________
Laurent
Dr.Willy
Guru

Joined: 15 Jul 2007
Posts: 547
Location: NRW, Germany

Posted: Sat Aug 30, 2014 12:42 pm

skarnet wrote:
Your SVNAME variable will not be defined in a run script […]

Ok.

skarnet wrote:
[…]run scripts inherit their environment from the supervision tree only. So you have to read the value of SVNAME or CONFNAME somewhere, in the filesystem. Why not in the service directory ? That's what it's there for.
Hardcode "CONFNAME=university" in the openvpn@university/conf file, and your run script would source ./conf .

Why not in the service directory? Because you're duplicating information.
In openvpn@university the ./config file will be "CONFNAME=university"
In openvpn@home the ./config file will be "CONFNAME=home"

You say that run scripts inherit their environment from the supervision tree only.
What exactly does that mean?

skarnet wrote:
As I said, you will not be able to use the same service directory for two openvpn instances with different configurations. You need two service directories. You can reuse the run script if you want, but you have to put the configuration somewhere.

Yes you will, unless you work hard to prevent that.
You said that "The ./run script is run with its current directory set to the service directory it is in". This means you can do
Code:
SVNAME=$(basename $PWD)
CONFNAME=${SVNAME#*@}
exec /usr/bin/openvpn --config /etc/openvpn/${CONFNAME}.conf


Edit: To clarify
The setup is
Code:
svdir/openvpn@/
svdir/openvpn@university -> openvpn@
svdir/openvpn@home -> openvpn@

The above approach will work as long as s6 does not resolve the symlinks openvpn@university and openvpn@home, but treats them as regular directories.
mv
Watchman

Joined: 20 Apr 2005
Posts: 6747

Posted: Sat Aug 30, 2014 3:23 pm

Dr.Willy wrote:
you started talking about "service generators" at the beginning of page two and now, halfway through page three I still have no fucking clue what that is supposed to be and what purpose that is supposed to have.

I think I described rather precisely what they are and what they are good for. If it is still unclear, have a look at the systemd wiki or look for further examples in /usr/lib/systemd/system-generators. The latter is of course only possible if you install systemd, but if you want to discuss init systems, you should know what init systems currently provide.
To avoid a misunderstanding: I do not think the generator solution of systemd is good. Something more flexible would be better. But currently s6 has neither.
skarnet
n00b

Joined: 01 Jul 2014
Posts: 25
Location: Paris, France

Posted: Sat Aug 30, 2014 5:26 pm

Dr.Willy wrote:
Yes you will, unless you work hard to prevent that.

You won't. It's just the way s6 works. One service directory per service instance.
s6-supervise actually stores its information into the supervise/ subdirectory of the service directory. Information about a service instance.
When s6-supervise is running on a service directory, it takes a lock, and you cannot run a second instance on the same directory. It's a choice: the directory represents an instance, not a class. If you want to have several instances, duplicate the directory. The ./run and ./finish files in the directory can be symlinks, if you wish - but most run scripts are short enough that it does not matter.
_________________
Laurent
steveL
Watchman

Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

Posted: Mon Sep 01, 2014 9:06 am

Dr.Willy wrote:
OpenRC does not supervise services.
OpenRC has to resort to pidfiles to figure out which process actually belongs to which service.

The first part is pretty trivial, in the overall scheme of things.

As for pidfiles, they're a hack, and don't actually work unless the daemon cooperates, or we wrap it with ssd which can still be subverted. That's why cgroups were invented, ages before systemd came around with its clever idea of copying launchd. Cgroups are a great idea, though systemd presents them as basically its invention, and its turd as the only arbiter of what the admin can do.

I helped qnikst add support for cgroups to openrc last year; though there were concerns over what the systemd-led changes will mean for future work, istr a Ted T'so mail about maintaining support for existing mechanisms.

If you want more background on process supervision, greycat's page is very good; Q 3.2 is very relevant to this.
Quote:
s6 does not provide a dependency mechanism.

No, and that's the more significant part, though again not as hard as people seem to think. It is, however a game-changer, and the openrc format is the best I've seen.

Essentially what we're talking about is adding at least one daemon process to monitor others (which may in turn require a monitor per-service.) If we're doing that, then it can react to other events. Though wrt network, I think we should rely on the dhcpcd work Roy has done, after he wrote openrc. That can take care of wpa-supplicant for us too (as discussed on the second page.)
mv
Watchman

Joined: 20 Apr 2005
Posts: 6747

Posted: Mon Sep 01, 2014 4:04 pm

steveL wrote:
As for pidfiles, they're a hack, and don't actually work unless the daemon cooperates,

Even if the daemon does cooperate, they are not reliable: The daemon can go down, and another process can get the same PID by accident. Not very likely, but possible in theory. At the very least, there is a theoretical race condition one way or the other.
Dr.Willy
Guru

Joined: 15 Jul 2007
Posts: 547
Location: NRW, Germany

Posted: Mon Sep 01, 2014 4:51 pm

mv wrote:
Dr.Willy wrote:
you started talking about "service generators" at the beginning of page two and now, halfway through page three I still have no fucking clue what that is supposed to be and what purpose that is supposed to have.

I think I described rather precisely what they are and what they are good for. If it is still unclear, have a look at the systemd wiki or look for further examples in /usr/lib/systemd/system-generators. The latter is of course only possible if you install systemd, but if you want to discuss init systems, you should know what init systems currently provide.
To avoid a misunderstanding: I do not think the generator solution of systemd is good. Something more flexible would be better. But currently s6 has neither.

I apologize for sounding rude earlier and I mean no offense here, but requirements and specifications don't seem to be part of your job profile.
There is nothing precise about statements like "Applications range from reacting to newly connected hardware or new net devices, through dynamically adding new software (e.g. I have the "squashmount" script in mind here, which might have to be started on dynamically added new directories), to setting up VMs/chroots in dynamically chosen new locations."
Why do I want to react to newly connected hardware? What do I want to do when new hardware is connected? How do I want to implement this reaction? Why is this implementation the best to solve the problem?
You kind of answered the 3rd question and list areas where you see that implementation being used. If I could look into your head and see what problems you have in mind, which solutions you considered, which of them you discarded and why, this would all make sense; but I can't.
All I see is that after you explain what s6-schedule could look like, you argue here that "In a forcibly static model, all possible checks must be coded in $dir/run itself." when in fact you can separate the ./run script from its configuration.
So in light of that misunderstanding I kept wondering whether dynamically generated services are needed at all. You say they are.

skarnet wrote:
s6-supervise actually stores its information into the supervise/ subdirectory of the service directory.
[…] It's a choice: the directory represents an instance, not a class.

Ah right, the supervise subdirectory.

steveL wrote:
Dr.Willy wrote:
OpenRC does not supervise services.
OpenRC has to resort to pidfiles to figure out which process actually belongs to which service.

The first part is pretty trivial, in the overall scheme of things.

As for pidfiles, they're a hack, and don't actually work unless the daemon cooperates, or we wrap it with ssd which can still be subverted. That's why cgroups were invented, ages before systemd came around with its clever idea of copying launchd. Cgroups are a great idea, though systemd presents them as basically its invention, and its turd as the only arbiter of what the admin can do.

I helped qnikst add support for cgroups to openrc last year; though there were concerns over what the systemd-led changes will mean for future work, istr a Ted T'so mail about maintaining support for existing mechanisms.

If you want more background on process supervision, greycat's page is very good; Q 3.2 is very relevant to this.

So basically the idea is to make start-stop-daemon a supervisor?

steveL wrote:
Quote:
s6 does not provide a dependency mechanism.

No, and that's the more significant part, though again not as hard as people seem to think. It is, however a game-changer, and the openrc format is the best I've seen.

I agree with the game changer part. Simply leaving the issue unresolved is not really an option.

In what regard do you think openrc's format is the best format?
The way I see it, it mostly comes down to functions in a file vs files in a directory. You could implement openrc's dependency-related functions as programs and get pretty much the same results (see the need/provide programs here).
Of course whether that is the best way to handle dependencies in s6 is skarnet's decision.
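For reference, openrc's dependency format is a depend() function inside the runscript itself; a minimal sketch (the service "foo" and its particular dependencies are invented):
Code:
#!/sbin/runscript
# hypothetical /etc/init.d/foo
depend() {
        need net localmount
        use logger dns
        after bootmisc
}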

steveL wrote:
Essentially what we're talking about is adding at least one daemon process to monitor others (which may in turn require a monitor per-service.) If we're doing that, then it can react to other events. Though wrt network, I think we should rely on the dhcpcd work Roy has done, after he wrote openrc. That can take care of wpa-supplicant for us too (as discussed on the second page.)

Yeah, I'm running a dhcpcd-only networking setup myself and have to say it's pretty cool.
On the other hand I do wonder if it is necessary to have two supervisors, or whether that is just implementing the same thing twice.
We already have the situation where we can react on wireless events via wpa_supplicant hooks or via dhcpcd hooks.
depontius
Advocate

Joined: 05 May 2004
Posts: 3509

Posted: Mon Sep 01, 2014 5:07 pm

steveL wrote:

I helped qnikst add support for cgroups to openrc last year; though there were concerns over what the systemd-led changes will mean for future work, istr a Ted T'so mail about maintaining support for existing mechanisms.


Could you explain a bit more about what was added? I know the cgroup stuff is mounted, but is there more to it than that? I need to read your link on process management before asking more.
_________________
.sigs waste space and bandwidth
mv
Watchman

Joined: 20 Apr 2005
Posts: 6747

Posted: Tue Sep 02, 2014 10:55 pm

Dr.Willy wrote:
Why do I want to react to newly connected hardware? What do I want to do when new hardware is connected? How do I want to implement this reaction? Why is this implementation the best to solve the problem?

These are completely irrelevant points for answering your original question.
Your original question was like asking "why would I need to store data on a harddisk" and I give you the answer "for many reasons; for instance, to store texts you typed for later printing or data for later comparison" and you complain that I did not give a complete analysis of why text editors are useful and whether it might be possible to implement them without harddisks. Nothing of that needs to be discussed for the decision whether storing data on a harddisk is useful: The latter is almost obvious by itself. The same holds for the possibility of dynamically managing services - a few examples where this is used are clearly sufficient, and reacting to newly connected hardware is one of several natural examples I gave.
Dr.Willy
Guru

Joined: 15 Jul 2007
Posts: 547
Location: NRW, Germany

Posted: Wed Sep 03, 2014 2:17 pm

mv wrote:
Your original question was like asking "why would I need to store data on a hard disk?". I give you the answer "for many reasons; for instance, to store texts you typed for later printing, or data for later comparison", and you complain that I did not give a complete analysis of why text editors are useful and whether it might be possible to implement them without hard disks. None of that needs to be discussed to decide whether storing data on a hard disk is useful: that much is almost self-evident.

Ah, this is a good example, because I hope it lets me clarify the problem (or at least my problem) with your responses:
In a discussion of "whether storing data on a hard disk is useful", the question "why would I need to store data on a hard disk?" needs two points to be answered:
a) why do I need to store data (what needs to be done: store data)
b) why do I need a hard disk (how it needs to be done: on a hard disk)
As I hope you can see, answering "whether it might be possible to implement them without hard disks" is not irrelevant, because it addresses b), while more examples only give more answers to a).
Depending on the answer to b), we might decide that it is better to store the data in RAM instead of on a hard disk.

I admit I should have been more precise earlier, but I hope I am now:
"why do we need to react to new net devices by dynamically generating services"
a) why do we need to react to new net devices (I can guess, but a specific example helps more than five general ones)
b) why do we need to dynamically generate services

mv wrote:
The same holds for the possibility of dynamically managing services - a few examples where this is used is clearly sufficient

A single use case is sufficient, as long as it is a critical one.
If we agree that reacting to new net devices is critical, then that part of the discussion is over.
The question that remains is how we want to do this.
mv
Watchman


Joined: 20 Apr 2005
Posts: 6747

PostPosted: Thu Sep 04, 2014 11:39 am    Post subject: Reply with quote

I am sorry, but I do not want to waste my time with pointless discussions. It is obvious that having no permanent storage is a serious limitation, just as it is obvious that an init system is seriously limited if it cannot manage services dynamically. Discussing how useful the feature is (permanent storage, or e.g. using a network stick without permanent configuration) and whether you can somehow hack around its absence is completely irrelevant to the observation that an init system lacking it is seriously limited. You are free to use cat >file as your editor, because you can prove that there are ways to do everything with it.
Dr.Willy
Guru


Joined: 15 Jul 2007
Posts: 547
Location: NRW, Germany

PostPosted: Thu Sep 04, 2014 11:48 am    Post subject: Reply with quote

mv wrote:
I am sorry, but I do not want to waste my time with pointless discussions

They are pointless because you keep missing and missing and missing the point.
I think I've explained the issue as best I could; if you still don't understand, I think we just have to agree that we don't understand each other's English.
Wing0
n00b


Joined: 19 Mar 2014
Posts: 4
Location: India

PostPosted: Thu Sep 04, 2014 8:46 pm    Post subject: Reply with quote

In the short term, systemd will keep winning as most Linux distros get converted.
But in the long term, I think Upstart will win the init war.

Chrome OS had a limited library of apps, whereas now it will have access to Google Play Android apps. This alone should propel Google Chrome past GNU/Linux in market share, especially with Google's marketing power and OEMs shipping the OS. Chromium/Chrome OS uses Upstart for init, which was originally developed by Canonical for Ubuntu. Personally, I don't see why Ubuntu switched from Upstart to systemd.

Current market share (based on online usage, from StatCounter):
Google Chrome: 0.13%
Google Android: 17.34%
GNU/Linux: 1.22%
_________________
Free Your Mind !
Fitzcarraldo
Advocate


Joined: 30 Aug 2008
Posts: 2034
Location: United Kingdom

PostPosted: Fri Sep 05, 2014 11:00 am    Post subject: Reply with quote

Wing0 wrote:
In the short term, systemd will keep winning as most Linux distros get converted.
But in the long term, I think Upstart will win the init war.

I seriously doubt that Upstart will win 'the init war'. To begin with, systemd is not just an init system; it has grown into much more than that. I think more of the seasoned GNU/Linux users are waking up to the impact of what Lennart Poettering and his fellow systemd 'cabal' (his word) members are doing. They are re-designing GNU/Linux from the ground up, and it would not be an exaggeration to call the end result systemd/Linux, a de facto new OS. Personally I'm not interested in using that, but I think the vast majority of existing users or newcomers to 'Linux' -- to use the popular term for the OS -- simply don't care. As long as it works out of the box, that's all that matters to most people.

And as far as distribution developers are concerned, the status quo says it all: most have already switched their distributions to systemd or have committed to doing so. I suspect this is because they see systemd as easier to maintain, making their own lives easier, and as easier for their desktop users (with a boot time closer to Windows 8/8.1 Fast Startup). Ultimately the systemd 'cabal' will effectively make the different package managers obsolete too, as the latest plan shows.

In my opinion, the end result of the radical systemd redesign of the OS could be greater take-up of the OS on the desktop, but the distinction between distributions will lessen and many will cease to exist, as there will be little point choosing one over another. The greater commonality and the larger user base of systemd/Linux will attract more crackers than at present. The de facto new OS will be much more like Windows or Mac OS than the current GNU/Linux.
_________________
Clevo W230SS: amd64, VIDEO_CARDS="intel modesetting nvidia".
Compal NBLB2: ~amd64, xf86-video-ati. Dual boot Win 7 Pro 64-bit.
OpenRC udev elogind & KDE on both.

Fitzcarraldo's blog
mayak
n00b


Joined: 16 Jul 2013
Posts: 26

PostPosted: Fri Sep 05, 2014 1:05 pm    Post subject: Reply with quote

Fitzcarraldo wrote:
In my opinion, [...]


Thanks Fitzcarraldo - this is exactly how I would guess/predict the future of systemd/Linux.
I really hope there is enough manpower to keep the well-oiled GNU/Linux userland alive.
Otherwise we will have to team up with the BSD guys and bring the BSD userland to Linux.
BSD/Linux, so to speak ;-)
Page 3 of 4