Gentoo Forums
RFC: xbus design notes
Gentoo Forums Forum Index » Gentoo Chat
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sun Jun 07, 2015 1:57 am    Post subject: RFC: xbus design notes

Just putting this out there, wrt a discussion with mv, depontius and the usual suspects. ;)

It's pure draft, vapourware, nothing to see here etc. Only interesting if you're into that sort of technical discussion (but doesn't fit any other forum afaict.)
Code:
# public-domain (by notadev & igli, if it matters where you live.)
# - all copyright is disclaimed, where applicable.
# if you can implement it, we'll sure as hell use it ;-)

Firstly let's take a look at standard IPC:
Code:
IPC:
     PID_a   |     kernel     | PID_b
             |  PERM          |
     [proto] |      FTAB      |  [proto]
     [send] [fd]    pipe   =>[fd] [recv]
             | TX   msgq   RX |
             |     sockbuf    |
Note that a msgq does not necessarily use an fd, though it does in Linux, and it makes sense to be able to select/poll one, though there are ofc POSIX Realtime (POSIX.4) mechanisms in-place for notification.

The important thing here is that the processes don't (necessarily) know each other, and address-space isolation is critical.
Since we have no idea where the data is coming from (in terms of alignment), nor where it's going to end up (because userland supplies the buffer write addresses), there is no real choice but to copy from process to process.
We've discussed mapped-msgq's (a standard implementation technique) using a slightly different API, to avoid any copying, which is still something we'd like to pursue. But let's crack on with prior-craft.

A pipe is the most basic of the above; you should be familiar with them from shell (or if not, /autojoin #bash for a few months, to get up to speed with userland.)

It's important to realise that if we're programming "Communicating Sequential Processes", we're likely to start with pipes, and then use shared memory, before we go onto anything else. The other IPC mechanisms are more likely to come into play when we want to provide an abstraction to other, unknown processes. From what I've seen, this one gets overlooked as a mechanism:
Code:
CSP:
     PID_M   |      kernel       | PID_N
  [open,map] |  PERM       PERM  | [open,map]
         <-SYNC-> " [shmem] " <-SYNC->
             |  @/R/W?    @/R/W? |
Yes, this is a lot like threading; it's preferable when you have different code cooperating, and/or you want to keep some isolation, which threads never have.
Again, processes don't need to know each other, though if not they're likely using the same lib-code.
A named shared-memory area is not tied to any one process; in fact it will stay around until the system reboots (or it is unlinked/reinitialised), as will other named POSIX IPC mechanisms.

Another thing to bear in mind is that often we just want event notifications, along with a number or pointer of interest, for which normal IPC mechanisms are a bit heavy. For this, POSIX realtime signals (with reliable ordering and count) are a perfect match:
Code:
events:
     PID_a   |     kernel     | PID_b
             |  PERM          |
             |      PTAB      | sigaction
 [sigqueue] [pid] int, info* [pid]
             | TX          RX |
             |                |

Anyhow let's take a look at how X does things:
Code:
XORG: (afair; notadev.)
     PID_a   |     kernel/NET  | X                              X |    kernel/NET   | PID_N
             |  PERM           |                                  |  PERM           |
    [Xproto] |       FTAB      | sigaction [Xproto]     [Xproto]  |    FTAB         | [Xproto]
     [send] [fd] -> sockbuf =>[fd] allow?  [recv] MUX  <| [send] [fd]    sockbuf =>[fd] [recv]
             | TX     UCRED RX | SIGIO           queue            | TX           RX |
             |                 |                                  |                 |

The important thing to realise here is that the userland mux is both implemented sanely and necessary for networked operation.

If we take a look at dbust, a mechanism pushed in order to implement a policy of GPL-evasion, we see why it's all kinds of bottleneck:
Code:
DBUS: cf: https://bugs.freedesktop.org/show_bug.cgi?id=33606#c9

     PID_a   |     kernel      | DBUS                            DBUS |     kernel       | PID_N
     [glib]  |  PERM           |                                      |  PERM            |   [glib]
     [dbus]  |       FTAB      | epoll    in  [POLICKY]   out         |        FTAB      |   [dbus]
     [send] [fd] -> sockbuf =>[fd] [recv] queue  MUX <| queue [send] [fd] ->  sockbuf =>[fd] [recv]
             | TX     UCRED RX |                                      | TX            RX |
             |                 |                                      |                  |

A serious problem with it is that it pretends one channel can do it all: all the various uses we've already seen, which are "traditionally" implemented by various POSIX mechanisms, each designed for a slightly different situation by people who actually knew what they were doing.
The bigger problem of what it's being used for on the local machine is not addressed by looking at what it does; though as we've seen, it was already a busted flush from day one.

More worrying is the recent Damascene-conversion to a POSIX API, rt-signals, worrying because there is an associated land-grab of all RT-signals on a machine from init; as if they've found someone else's ball, it's shinier than theirs, and now are claiming that it is theirs.
This seriously degrades the capability of any machine using such an insanely-designed init-process.

Let's look at a well-designed protocol, specifically for the local-machine/cluster situation.
This is what kdbust is trying to be, but failing miserably: because it tries to reinvent dbus, which tries to reinvent the kernel under RedHat's control.
TIPC is a much saner project, and all it needs is CONFIG_TIPC in your kernel config (you might want to up CONFIG_TIPC_PORTS to 16384 or more):
Code:
TIPC: cf: http://www.spinics.net/lists/netdev/msg190623.html

     PID_a   |   node/CLUSTER       kernel    node/CLUSTER       | PID_N
             |  PERM                                             |
     [proto] |        RX      TIPC            TX        FTAB     |  [proto]
     [send] [fd] -> [port]<= sockbuf MUX <| [port] |> sockbuf =>[fd] [recv]
             | TX                                             RX |
             |                                                   |
# -- seems like an ideal fit for local X (including SO/HO LAN.)

So moving forward, first thing we need to do is get device information out; this is an idea for how we'd replace udev with edev, principally to avoid switching context back into userspace, simply to drop it (ie: use BPF.)
Code:
UMUX: [capnproto: enc/dec]
     DEVICE  |                                       kernel                  node   UMUX <=ctl=> | edev: config
             |                             NET -> rt-netlink -> dhcpcd                           | e(disk|power)
             |     DRIVER [NET: mactab]      | AUDIO -> "    -> ejack-ctl [sigqueue]     FTAB    |    [capndec]
     [IRQ] [port]<= bhalf NOTIFICATION CHAIN | BPF | UMUX [capnenc] TIPC <| [port] |> sockbuf =>[efd] [recv]
             | TX    ...                   handle? Y                  MUX                     RX | ejack-data
             |                               N: drop                                             | PID_N

We've had some disagreement about encoding kernel-side, but the argument about variable-length data implying the rest, won. (*huh*;)
Andy Lutomirski recommended capnproto on the lkml (sorry, don't recall url; came up in other thread/s) and it rocks, so let's use it.

Secondly we need a pub/sub channel for userland to react to kernel events, after they've been handled above, and for userland to talk to each other. So let's use TIPC, as it's much better than anything else; we both feel nothing else is going to improve on it, or it'll end up being effectively the same thing:
Code:
UBUS: cgroups: delete-on-empty and: delete-and-notify
   UDP socket for control events; UCRED required to connect [config: port, daemon per namespace; eg: ejack]
   standard TIPC for events (UMUX: recv as above)
   ORB: [capnproto: RPC]
     PID_a      |       node           kernel   node                | PID_N
                |  PERM                                             |
  [capnproto]   |                       TIPC               FTAB     | [capnproto]
     [send]    [fd] -> [port]<= sockbuf MUX <| [port] |> sockbuf =>[fd] [recv]
                | TX                                             RX |
                |                                                   |

Hope that makes sense. No doubt it'll be tweaked, so don't throw eggs. ;)

If you want to learn how to code with the standard userbase, see the books link from /topic ##posix. If you don't have a copy of Gallmeister, you won't get far.

Note: PERM means the kernel permission-check, which is going to happen in any case.

edits: explanation, links, sperling ;) Lutomirski not Mitorski.
"ubus" is taken; the project is very relevant though.


Last edited by steveL on Thu Aug 27, 2015 3:05 pm; edited 13 times in total
augustin
Guru


Joined: 23 Feb 2015
Posts: 318

PostPosted: Sun Jun 07, 2015 8:41 am

I have a *lot* to learn before I understand any of this, but it does sound interesting.


[/me subscribe to thread]
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Jun 10, 2015 5:06 pm

Thanks augustin; edited to add a bit of explanation.

HTH.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Jun 10, 2015 5:23 pm

Just copying some notes in here, so they're immediately-visible:
Ying Xue wrote:
TIPC supports two modes: single node mode and network mode. If we want all applications to be able to easily talk to each other locally, let TIPC just work under single node mode. [the default]
  • It can guarantee multicast messages are reliably delivered in order
  • It can support one-to-many and many-to-many real-time communication within node or network.
  • It also supports location-transparent addressing: a client application can access a server without having to know its precise location in the node or the network.
    If D-Bus used this schema, I believe the central D-Bus daemon would not be necessary any more. Any application could talk directly to any other: one-to-one, one-to-many, or many-to-many.
  • TIPC also has another important and useful feature which allows client applications to subscribe to a service port name, receiving information about which port names exist within the node or network. For example, if one application publishes a system bus service like {1000, 0, 500}, any client application which subscribes to that service can automatically detect its death in time, should the publishing application crash accidentally.
Q: Can I just create an AF_TIPC socket on this machine and make it work without any further setup?
A: No extra configuration is needed if you expect it to just work in single node mode.

When publishing a service name, you can specify the level of visibility, or scope, that the name has within the TIPC network: either node scope, cluster scope, or zone scope. So if you want it to be valid only locally, you can designate it as node scope, which means TIPC then ensures that only applications within the same node can access the port using that name.

Note the provenance:
tipc website wrote:
The Transparent Inter-Process Communication protocol allows applications in a clustered computer environment to communicate quickly and reliably with other applications, regardless of their location within the cluster.

TIPC originated at the telecommunications manufacturer, Ericsson, and has been deployed in their products for years. The open source TIPC project was started in 2000 when Ericsson released a Linux version of TIPC on SourceForge.

A project Working Group was created in 2004 to oversee the project and direct the evolution of the TIPC protocol.

No doubt "I believe the central D-Bus daemon is not necessary any more," is why it wasn't pursued. ;)

But that is exactly the right solution, whatever underlying mechanism/s we use. Let the kernel multiplex, as that's what it's for.
If you must mux in userland, do it like the X11 server does, and limit the processing to domain-specific data.
TIPC spec wrote:
These assumptions[1] allow TIPC to use a simple, traffic-driven, fixed-size sliding window protocol located at the signalling link level, rather than a timer-driven transport level protocol. This in turn leads to other benefits, such as earlier release of transmission buffers, earlier packet loss detection and retransmission, earlier detection of node unavailability, to mention but some.

Of course, situations with long transfer delays, high loss rates, long messages, security issues, etc. must also be dealt with, but from the viewpoint of being exceptions rather than as the general rule.

The mcast part is interesting (remember this is just a data channel, not a control one):
Quote:
MCAST_MSG messages are also subject to name table lookups before the final destinations can be determined.
  • A first lookup is performed, unconditionally, on the sending node. Here, all node-local matching destination ports are identified, and a copy of the message is sent to each of them.
  • N/A At the same time, the lookup identifies if there are any publications from external, cluster local, nodes. If so, a copy of the message is sent via the broadcast link to all nodes in the cluster.
  • N/A At each destination node, a final lookup is made, once again to identify node-local destination ports. A copy of the message is sent to each of them.
  • If any of the found destination ports have disappeared, or are overloaded, the corresponding message copy is silently dropped.

I'd rather drop connections, but I haven't read deeply enough yet to understand how we'd deal with the exceptional situations; hopefully we can get notifications of overloaded ports/processes. (I'd be surprised if not.)
Quote:
Process Overload Protection
TIPC must maintain a counter for each process, or if this is impossible, for each port, keeping track of the total number of pending, unhandled payload messages on that process or port. When this counter reaches a critical value, which should be configurable, TIPC must selectively reject new incoming messages.
Which messages to reject should be based on the same criteria as for the node overload protection mechanism, but all thresholds must be set significantly lower. Empirically a ratio 2:1 between the node global thresholds and the port local thresholds has been working well.

In any event, the less drastic model is just to do what they're doing, so I'm happy to go with their wealth of experience ;)

The exceptional cases are nonetheless where we might add policy knobs, like banning a uid, or mandating that certain channels must be kept up with, or the reader will be dropped (and must reconnect.) And as stated, we'd have a control channel for fd passing, and perhaps system-level state-notifications (though we might want RT signals for some things.)

[1]Assumptions (all applicable to localnode, or simply not an issue):
Quote:
TIPC has been designed based on the following assumptions, empirically known to be valid within most clusters.
  • Most messages cross only one direct hop.
  • Transfer time for most messages is short.
  • Most messages are passed over intra-cluster connections.
  • Packet loss rate is normally low; retransmission is infrequent.
  • Available bandwidth and memory is normally high.
  • For all relevant bearers packets are check-summed by hardware.
  • The number of inter-communicating nodes is relatively static and limited at any moment in time.
  • Security is a less crucial issue in closed clusters than on the Internet.

IP checksumming is already carried out by network hw, and I'd hope that's either taken care of in the existing kernel impl, or simply not used for internal-node comms. Either way it's (hopefully) SEP. ;)
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Jun 10, 2015 5:25 pm

Addressing:
Ying Xue wrote:
The basic unit of functional addressing within TIPC is the port name, which is typically denoted as {type,instance}. A port name consists of a 32-bit type field and a 32-bit instance field, both of which are chosen by the application.

Often, the type field indicates the class of service provided by the port, while the instance field can be used as a sub-class indicator. Further support for service partitioning is provided by an address type called port name sequence.

This is a three-integer structure defining a range of port names, i.e., a name type plus the lower and upper boundary of the instance range. This addressing schema is very useful for multicast communication.

For instance, as you mentioned, D-Bus may need two different buses, one for system, another for user. In this case, when using TIPC, it's very easy to meet the requirement:

We can assign one name type to system bus, and another name type is to user bus. Under one bus, we also can divide it into many different sub-buses with lower and upper.

For example, once one application publishes one service/port name like {1000, 0, 1000} as system bus channel, any application can send messages to {1000, 0, 100} simultaneously.

One application can publish {1000, 0, 500} as sub-bus of the system bus, another can publish {1000, 501, 1000} as another system sub-bus.

If a process sends a message to port: {1000, 0, 1000}, it means the two applications including published {1000, 0, 500} and {1000, 501, 1000} both receive the message.

So I guess you would want to provide a minimal lib that does the client side, along with a daemon ORB (assuming it were in fact necessary to optimise userland dcop.)
Quote:
If you use this schema, a central daemon is not necessary. Any application can directly talk to any other: one-to-one, one-to-many, and many-to-many.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Jun 10, 2015 5:29 pm

What's all this about dcop?
KuroNeko wrote:
I didn't try it, but dcop sounds awesome, for how it was. Script your desktop? Yes please!
I could even understand that you would like to push some commands to a terminal (konsole) or to a text editor like kate, so the user can look over it and decide what to do. That would be awesome for learning and customizing!
Could you do that with dcop? I found a few examples, where it had been done, but I didn't use KDE at that time, sadly...
Can you do that with dbus? Maybe? I have a hard time figuring out, how to handle dbus on the commandline. Where is KDCOP or whatever you would call it, and why isn't it actively promoted?

When 4.x was being conceived, everyone wanted to collaborate across desktops. So KDE agreed to use dbus, and forget about dcop.

It's a shame really, as dcop had been in-place since KDE-2 (if you know KDE, two series is a long time) and was much nicer.
For instance you had kommander, which enabled quick and efficient scripting, and was a step up from kaptain (Qt-based) for making nice user-interfaces.

Thanks for querying this; I had a bookmark for kaptain, but had deleted stuff about kommander. As you'll see the docs for the latter are rather threadbare (no tutorial examples.) The ALSA MIDI Kommander (which I just found) is a nice use-case.

So officially it's "dead" technology, and no doubt sneered at by dbus-fanbois. But it was so much more useful; as the last url shows, people actually used it to "applify" their desktop under their own control. I've not seen anything like the same sort of end-user adoption with dbus.

It rocked, and I was sorely disappointed when nothing similar appeared on 4.x, as I'd only discovered kommander during 3.5.x (which was a sweet spot, much like 4.10 without semantic-craptop.)
--
I make no apologies for my opinions on eg: semantic-desktop. If you want to disagree with them, please start another thread about the specific aspect you wish to disagree with, explaining why you think technology-X is a Good Thing(tm).
Note it will be about that technology, and not the subject of this thread, so it does not belong here.
depontius
Advocate


Joined: 05 May 2004
Posts: 3374

PostPosted: Thu Jun 11, 2015 8:37 am

Rhetorical question...

My understanding of kdbus is that nobody uses it directly, they use it through libdbus. I'm under the impression that libdbus + kdbus are pretty much a direct replacement for the current dbus daemon.

Is it possible for libdbus to be implemented, as-is, on top of tipc?
(I know that it might well be messier than necessary for use as an IPC, but could it be done, as simply as anything can be done with dbus?)

I know your first response will be annoyance, but there is another reason for asking this question.
_________________
.sigs waste space and bandwidth
tclover
Guru


Joined: 10 Apr 2011
Posts: 516

PostPosted: Thu Jun 11, 2015 9:13 am

First, part of an eye is on this... still busy with other stuff. Second point: any input on how to implement the IPC "service" to facilitate communication and un/subscription to services, besides being, obviously, able to communicate (single-to-single/many, many-to-many/single) with processes? (There is a hint on the kDbus thread with a quick exchange, if you had the time to process the input (irony).)

depontius wrote:
Is it possible for libdbus to be implemented, as-is, on top of tipc?

I am guessing you're looking for the right cutting tool to be able to cut the grass efficiently (under RH feet)? You should also consider a kind of aggressive PR for this, because the opposition will certainly be fierce. It should have been clear for years now that some (good(tm)) technologies are not pushed because of their inherent goodness(tm); the old pulse's and systemdebug's song. Even Linus acknowledged the need for a sound-mixer-capable desktop, which was fairly well supported on ALSA/OSSv4 (no idea for <OSSv4) for ages... a decent audio server? JACK (supposedly difficult to set up and configure, according to Linus) has done this for ages as well. So what of the old pulseaudio justifications? The SystemDebug justifications for fast boots with crap/bloatware? The creators of those (good(tm)) things and their supporters have a few things in their baskets, for sure.
_________________
home/:mkinitramfs-ll/:supervision/:e-gtk-theme/:overlay/
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Thu Jun 11, 2015 6:19 pm

depontius wrote:
Rhetorical question...

Oh God, the worst sort; especially when you don't back up the rhetoric with anything substantive.
Quote:
Is it possible for libdbus to be implemented, as-is, on top of tipc?

Yes.

That does not address the fact that dbust is a bad idea, and however well you work it, you cannot shove all the various types of data down one channel, promising the undeliverable and forcing ballooning RAM requirements, "because dbus."

Someone else might want to pursue that; I have no interest in it, nor in supporting GPL-evasion.

In fact, we need to address the "functional interface has the same legal effect as linkage" aspect for ubus; current thinking was to replicate the kernel-module impl, in terms of flagging GPL.X (possibly +) modules.

Did you not take in that post from Torvalds, where he finally came out and stated flat out that "this is bad userspace coding"?
Quote:
I know your first response will be annoyance, but there is another reason for asking this question.

Then courtesy really does dictate that you provide that reason upfront.

All that needs to happen is to look at the uses of dbus, and start separating by domain, timeliness and other requirements.

But yes, in general we're looking at providing the same functionality; just not munged into one inappropriate bottleneck, not replacing the kernel, and certainly not trying to use anything more complex than PAM; no polickyshit, for sure, and definitely no js in base-system.

More work is going to be needed in C than anything else; while we may use the same encoding and format as capnproto, it can't be in C++; this is UNIX.

So the dcop case is certainly in-scope; but I'd rather go back to dcop and the GNOME ORB from when kde-4 was under discussion, in terms of API provided to apps (dcop), and performance.
I don't see dbust as anything other than a massive dead-end, especially given how crap it is compared to the uses of dcop people got up to.

By all means provide your reason; but I'd ask any conversation on this topic goes to another thread thereafter, as the merits of your plan, which we've discussed to death before, have nothing to do with this, imo.

We're a long way from userland desktop apps, and providing the same crappy mechanism as dbust would be a colossal waste of time, as well as a real motivation-killer.

Who wants to play catch-up with a crappy design, that we know is crappy from the beginning, only to be vilified by propaganda and smirked at in the same way that Winbloze-fanbois smirk at Unix? /rhetoric; hopefully explanation is not necessary given above.

edit: apologies if that sounds harsh; what I'm really getting at is: if you have TIPC, you don't need dbust (the RPC^W IPC mechanism.)


Last edited by steveL on Thu Jun 11, 2015 6:57 pm; edited 1 time in total
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Thu Jun 11, 2015 6:42 pm

tclover wrote:
any input on how to implement the IPC "service" to facilitate communication and un/subscription to

Hmm sure this came up before in discussion with mv about inetd and socket descriptors in perl.

See the books link and get hold of both volumes of UNP; you also need APUE, Gallmeister and Butenhof, as well as Lewine for the base API, but start with UPE, after the awkbook and K&R2, as well as time in ##c learning how to use it portably.

We get quite a few people in ##posix who need redirection to ##c because it's clear they don't know the basics, yet.

If you want to learn it in-depth, I recommend getting the original UNP(1990) as well; Bach and Lions are essential background too.
Quote:
... services besides being, obviously, able to communicate (single-to-single/many many-to-many/single) to processes.

Yeah, that channel we have with TIPC; I finally enabled the config in kernel, but I still need to upgrade userland.

I'd tend toward not having one MCP for every service; rather split out according to domain, as different domains typically have different ordering and scheduling constraints.

We also need to bear in mind that the admin should be able to configure what channel is used, positing some hypothetical easy-IPC lib, and assuming that the POSIX libc really does need further wrapping.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Fri Jun 12, 2015 5:18 pm

@depontius: My apologies. What you're reading is not annoyance, it's frustration, as my headspace wrt this is taken up with how we're going to sort out the kernel-side, even if I'm not going to do much/any of it, and how the low-level interface with userspace, which is also the low-level ubus interface, is going to work.

My conclusion above ("if we have TIPC, we don't need dbust-RIPC-ORB") is essentially what your question is premised upon.

The issue is that dbust is a bad design, because it promises things no system can deliver, and still stay responsive; the killer example for us was that it effectively requires an infinite-RAM machine, where a slow reader can hang around forever and bottleneck everything else, because we can't just drop the connection, because we're nubs who promised the undeliverable.

This was discussed in the other thread, wrt earlier discussion with kernel-netdev bods, who'd told them they needed to correct that obvious deficiency, only to be hand-waved away.
So there's no way we want to make the same design mistake, and if you recall our earlier discussions, I was always of the opinion that we should just disconnect slow readers, though TIPC simply drops the message which is gentler.

Again, though this comes into userspace policy, which I don't see as our concern; we only provide mechanism to implement whatever policy the admin decides, which by definition cannot be known when we're coding, only verified somewhat at runtime.

In any event, it means we're not going to provide the same guarantees as libdbus, because they're insane guarantees to think you can promise. Nor are we interested in supplanting eg RT-signals, to send 5000 tiny event notifications at startup: first and foremost don't send so many.
Given that, I don't see how we can provide the same thing, though we can ofc provide the same effective functionality, for sane userspace apps.

Yes, that can be done with a libdbust wrapper to provide the same API, but you are then playing ABI catchup games, and effectively back in the old pit saellevan outlined to you several times: catchup with Windoze-wannabes who play all the Microserf-style dirty-tricks they can to make it harder, for which no-one will thank you in any case.
And really they're terrible people to emulate; that patch being needed is an indication of someone committing incredibly basic programming errors: thinking errors, which means they're not thinking things out before they rush ahead. We find that attitude alarming.

Anyone who's got any substantive experience in programming knows the truth of the adage: "Thinking is 70% of the game."

Back on point, the capnrpc-protocol over TIPC, is the equivalent of k/dbust.
Not every domain is going to want to use it, (or you're likely not dealing with domain experts) but for sure it works for the desktop ORB case, as that's really what the protocol is designed for (and nothing we can take credit for, ofc.)
Personally, I'd keep systray notifications separate from that though, albeit with a shared, stable ABI like LADSPA, the exemplar for all of the userspace abstractions, if we're doing things properly.
(Though we might prefer C-style naming, not CamelCase, that's SEP;)

The main point is that we're not interested in supporting a badly-conceived userland; if it's borked, correct your code.
In the dbust case, it's unfortunate that the bad design is baked in at what app-coders think of as a system-level layer.

We won't do anyone any favours by pretending otherwise, nor by continuing to do things badly.

But it would be interesting to see what kind of API you wanted to provide, given the constraints implicit in any sort of real-world programming.

@tclover: same applies to you; please feel free to share what you have in mind, since as I said, our focus is on lower-level mechanism; someone else is going to have to apply thinking time to usage.

Bear in mind there are other forms of IPC provided, as discussed above, which will be more suited for certain things (like a control msgq, or an AF_UNIX socket fd for initial daemon connection and fd-receipt.)

Don't suppose anyone has experience of BPF do they?


Last edited by steveL on Fri Jun 12, 2015 5:42 pm; edited 1 time in total
szatox
Veteran


Joined: 27 Aug 2013
Posts: 1717

PostPosted: Fri Jun 12, 2015 5:30 pm    Post subject: Reply with quote

Quote:
however well you work it, you cannot shove all the various types of data down one channel, promising the undeliverable and forcing ballooning RAM requirements, "because dbus."

I know, a single channel is a good place for a bottleneck, but hey, what benefit do you get from a dozen channels if there is a single CPU carrying them all?
Right, we already have more than one core, so how many cores do you need before single-channel IPC becomes the limiting factor?

Bonus question: which way (single channel vs independent channels) causes the bigger overhead? Does the answer reverse as the pool grows/shrinks?
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Fri Jun 12, 2015 5:51 pm    Post subject: Reply with quote

szatox: the bit you're missing there is how that ties in with things like scheduling. We cannot use the same single-process bottleneck to mux all our data across various domains, or we shoot ourselves in the foot before we begin.

We let the kernel multiplex various channels, each selected according to the domain, and configurable at runtime, precisely because we just use what POSIX provides (such as standardisation of the sockets API, and AF_UNIX.)

Only the kernel has all the information with respect to things like priority-scheduling, and only it can deal with things like priority-inversion.

That's why dbust is such a dead-end, and why I refer to it as "trying to reinvent the kernel."
Shoving it into the kernel without first recognising that, is simply foolish.

Have a read of Gallmeister, if you're interested. Best guide to POSIX.4 (Realtime API) around, and discusses a range of interesting topics along these lines.
skarnet
n00b


Joined: 01 Jul 2014
Posts: 25
Location: Paris, France

PostPosted: Sun Jun 14, 2015 3:01 pm    Post subject: Reply with quote

I'm not knowledgeable about object broker protocols - CORBA, DCOP, TIPC or anything else - but when a company made me use D-Bus for a professional project, the first question I asked was: what is it used for, what does it help us accomplish?
At the time, the project manager couldn't tell me more than "it's how the different applications will communicate with one another", but it was enough to get me thinking about it. What do buses do that native Unix communication mechanisms do not?

The short answer is easy: many-to-many communication. Unix IPCs are all one-to-one, one-to-many or many-to-one; many-to-many does not exist natively in Unix, and that's what a bus provides.
Things get a little more complicated when you dig a little deeper. I had a look - actually many, many looks - at the D-Bus specification to try and understand what on earth it is that people find so vital in it. I'll spare you the whole analysis and give you the results directly.
What people usually mean when they say they want a bus, and what they are actually using when they're using D-Bus, is a combination of three different components.
  1. A pubsub mechanism. Processes need a channel to publish information, so that other processes that subscribe to that channel get the information. D-Bus implements this via D-Bus signals.
  2. An RPC mapper. Processes want to call functions exported by other processes, and need a way to identify the correct process to send the call to. They also want to export functions that other processes can call. D-Bus implements this via D-Bus methods.
  3. Structured data manipulation and higher level access. Native Unix IPCs only handle strings of bytes. (Or, in one case, file descriptors.) Processes want a way to use structured objects in their RPC calls or information publication; they don't want to bother with low-level details such as marshalling. High-level access primitives should be made available. D-Bus implements this via bindings, i.e. high-level D-Bus access in different languages.
And that's basically it. And these three components make perfect sense. Even as a low-level systems guy that has little need for many-to-many communication, I can very much understand why applications would want those 3 things. More than that probably goes outside the scope of a bus.

So, why is D-Bus bad? For the exact same reason why systemd is bad: it is poorly engineered.
  • It is too big and overly complex for what it does. I suspect I lost more hair tearing away at it while reading the libdbus documentation than in 10 years of aging.
  • It is overengineered and attempts to do too much: it goes beyond the scope. (Two words: D-Bus activation. lol.)
  • It is monolithic.
But the principle of D-Bus isn't bad. The D-Bus protocol itself, what processes exchange on the wire, isn't bad. What is bad is the engineering of the D-Bus software itself: design and implementation.

I really don't understand all this focus on the kernel. Some people absolutely want a bus in the kernel. The kdbus people, of course, and unfortunately they've managed to brainwash Greg KH to their side, but even here - you guys are looking for alternatives, so why all this kernel talk? It is not necessary to implement a bus in the kernel. All that is necessary is a proper, efficient implementation in userspace.
Please read what Linus says about it if you haven't already done so. I firmly believe that a well-engineered userspace bus can do everything applications need it to do, without performance issues. Sure, there will be twice as many context switches if data goes through userspace servers; but the day that becomes the bottleneck, I'll be a happy man, and let kernel folks do their thing.

I'm sorry that skabus is on hold for now. It has been preempted by init and service management stuff. :) But I'll get back to it at some point. What has already been implemented: the pubsub server, the pubsub client library, the RPC mapper server, the RPC mapper client library. What is still missing: the command-line clients, the structured data management library, the high-level interfaces.
I won't be able to come back to skabus right when my service manager is complete, because I will need funds, so I'll look for professional contracts. Once I get enough money to live through another year, I can focus back on this project full-time.
_________________
Laurent
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sun Jun 14, 2015 4:40 pm    Post subject: Reply with quote

Thanks for sharing your insight and analysis, skarnet. Just what we need :-)
skarnet wrote:
A pubsub mechanism. Processes need a channel to publish information, so that other processes that subscribe to that channel get the information. D-Bus implements this via D-Bus signals.

TIPC does this much better.
Quote:
An RPC mapper. Processes want to call functions exported by other processes, and need a way to identify the correct process to send the call to. They also want to export functions that other processes can call. D-Bus implements this via D-Bus methods.

The first part of this is inherent in TIPC.

The second part is logically not part of the transport, but the protocol both ends talk over that transport, along with:
Quote:
Structured data manipulation and higher level access. Native Unix IPCs only handle strings of bytes. (Or, in one case, file descriptors.) Processes want a way to use structured objects in their RPC calls or information publication; they don't want to bother with low-level details such as marshalling.

Nothing wrong with using a lib to do this; we'd usually reach for libpack first, but for this case capnrpc is ideal.
Quote:
High-level access primitives should be made available. D-Bus implements this via bindings, i.e. high-level D-Bus access in different languages.

Once we have a C lib doing the above, this is trivial, since every language can interface with C.

Though TIPC itself already has support for Perl, Python, Ruby, and D, which means it's going to be easy to see what's going on under the hood, from whatever, as well as interface directly within the language of choice.
Quote:
So, why is D-Bus bad? For the exact same reason why systemd is bad: it is poorly engineered.
  • It is too big and overly complex for what it does. I suspect I lost more hair tearing away at it while reading the libdbus documentation than in 10 years of aging.
  • It is overengineered and attempts to do too much: it goes beyond the scope. (Two words: D-Bus activation. lol.)
  • It is monolithic.
But the principle of D-Bus isn't bad. The D-Bus protocol itself, what processes exchange on the wire, isn't bad. What is bad is the engineering of the D-Bus software itself: design and implementation.

The other problem is the uses it's being put to; if you're using it for audio, YDIW afaic, yet we got the supposedly-pro audio usecase as justification ("wouldn't it be nice?" effectively) for kdbust.

Similarly, if you're firing off 5000 events at startup, stop doing that.
Quote:
I really don't understand all this focus on the kernel. Some people absolutely want a bus in the kernel. The kdbus people, of course, and unfortunately they've managed to brainwash Greg KH to their side, but even here - you guys are looking for alternatives, so why all this kernel talk? It is not necessary to implement a bus in the kernel. All that is necessary is a proper, efficient implementation in userspace.

Well, Ted T'so agrees with you.

The kernel side (umux) above comes from notadev looking at implementing an in-kernel mactab; he's not really interested in getting stuck in for such a simple thing alone, since it's a fair bit of work to get into kernel-coding.
But it does tie into notification-chains, and it seems like a good idea both to establish that rt-netlink is around to stay for dhcpcd, and to use BPF to drop the majority of events which are of no interest. That's a more substantial amount of work, which would justify the initial time investment, and would be easier given that mactab is such a simple task (supposedly.)

mv brought up using the same protocol we use for the user-side bus from kernel-side, as well; though we'd mucked about with the text above already, mainly as a way of seeing what TIPC is about, by comparison with X.

In any case, we don't need "to implement a bus in the kernel" for the wider ubus situation; that's the whole point: TIPC has already been there for over a decade.
Quote:
Please read what Linus says about it if you haven't already done so.

That's more about the insanity of how dbus is being used, and really confirms what I'm saying here, imo.
You're right: dbus protocol itself is not bad; in fact we quite liked it when it first came out. The problem is how it's being used, and for us, the GPL-evasion issue that has already shown its head with LGPL-GPL combination work; where the GPL part has no conventional API, only what it exports over dbus to an LGPL component, which is the only way to use it.
Frankly, we don't want any part of that, nor indeed any legal exposure to it.

The problem with how it's being used, is down to how it's been promoted as a panacea for everything.
It was fine for a desktop-app-bus, though nowhere near as good as dcop, which was also twice as fast.

Really a review at the beginning should have caught the fact that it was nowhere near as useful as dcop, quite apart from any performance problems.

Do please read up on TIPC, or further discussion is going to be difficult.
That's the homepage link, which I don't think I gave above; the links above are the more technical ones.
Quote:
I firmly believe that a well-engineered userspace bus can do everything applications need it to do, without performance issues. Sure, there will be twice as many context switches if data goes through userspace servers; but the day that becomes the bottleneck, I'll be a happy man, and let kernel folks do their thing.

Feel free to write one, then, as you appear to be aiming to do; personally I feel it's a waste of time if you try to do the same thing as dbust, and use it inappropriately.
I think either you'll end up with the same bottlenecks, or it won't be the same thing at all: you'll in fact be using several mechanisms, or it will bottleneck, or you won't be using it in the same way (eg for bulk data with reliability and no-disconnect, as well as/or for many tiny event notifications back and forth.)
Quote:
I'm sorry that skabus in on hold for now. It has been preempted by init and service management stuff. :) But I'll get back to it at some point. What has already been implemented: the pubsub server, the pubsub client library, the RPC mapper server, the RPC mapper client library.

You should take a look at AF_TIPC for the first two, when you get round to it, if you don't mind suggestions.

Good luck with whatever :-)
szatox
Veteran


Joined: 27 Aug 2013
Posts: 1717

PostPosted: Sun Jun 14, 2015 6:15 pm    Post subject: Reply with quote

Thanks steveL, I will have a look at some guide on this matter. You're right, I didn't consider scheduling, because I assumed data for each destination goes to a different buffer, and at that point it doesn't behave any differently from multiple independent channels (you can populate multiple buffers and read them in a different order). Who knows, maybe I'll find some interesting notes there :)
In the meantime, skarnet reminded me of another question I knew I wanted to ask but couldn't recall right then:

Why do we actually NEED any kind of BUS application at all in the first place?

Like, what is wrong with unix sockets for example?
If you want your application to talk to another one, you do know what you want to talk to, right? You know the application name and protocol. And really, "an RPC mapper that lets you identify the correct process" is hardly an excuse. Whatever way you use, you must hardcode some information that will let you identify that process, so what makes hardcoding small-talk to a bus better than hardcoding /run/application/socket instead? And if you keep those sockets in a common location, you will even be able to enumerate them with ls ;)

Manipulating structured data is not something that requires bus either. What you actually need here is a set of functions doing serialization and deserialization for you. Bindings.

That basically leaves us broadcasting events. Hmm... Doesn't inotify do that already?
augustin
Guru


Joined: 23 Feb 2015
Posts: 318

PostPosted: Wed Jun 17, 2015 10:09 am    Post subject: Reply with quote

steveL wrote:
Thanks augustin; edited to add a bit of explanation.

HTH.


Thank you, Steve. It's useful.
I'm still following closely, learning from you all.
Thanks.
tclover
Guru


Joined: 10 Apr 2011
Posts: 516

PostPosted: Wed Jun 17, 2015 12:29 pm    Post subject: Reply with quote

to @steveL & @skarnet and @others
from @somebody-who-briefly-used-Dbus

First, TIPC was designed to be an almost general-purpose and efficient IPC. I say almost because the author lists some assumptions behind TIPC that impose certain limitations. (See the TIPC specs for more info.) So, the one/multi-to-one/multi communication with packet sizes up to ~66KiB (IIRC) is implemented in a low-latency way to satisfy (fast/big) cluster requirements.

That's it for the intro. So, the transport side of the IPC should be marked resolved; this satisfies point #1: the publication-subscription mechanism.

Now, the *general* part of the supposedly "general purpose IPC" requires a userspace library to satisfy the RPC and structured-data points. TIPC does not impose any requirements on the data side; it does the transport/publication/subscription and maintenance of the buses, in a network-like way that satisfies the latency requirement.

Indeed a client can use methods to make (function) calls to other clients, or use general D-Bus methods to get/set properties of other clients. And this method nature of D-Bus permits license evasion. So some care should be taken with respect to how/why/what to implement for RPC/structured-data in the userspace library.

I don't know of any other IPC that implements similar RPC/structured-data mechanisms... I should take a look at capnrpc when I have some free time.
_________________
home/:mkinitramfs-ll/:supervision/:e-gtk-theme/:overlay/
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Jun 17, 2015 1:51 pm    Post subject: Reply with quote

This is kind of a scattergun post, talking out the options.
szatox wrote:
Why do we actually NEED any kind of BUS application at all in the first place?

Like, what is wrong with unix sockets for example?
If you want your application to talk to another one, you do know what you want to talk to, right? You know the application name and protocol. And really, "an RPC mapper that lets you identify the correct process" is hardly an excuse. Whatever way you use, you must hardcode some information that will let you identify that process, so what makes hardcoding small-talk to a bus better than hardcoding /run/application/socket instead? And if you keep those sockets in a common location, you will even be able to enumerate them with ls ;)

If you want publish/subscribe functionality, where a process can fire off an event notification, without knowing who's going to be interested, though it can be many other processes, and does not need to hang around once the message is sent, ie it can exit if need be, then you really need a bus.
Either a userland mux, say from a message queue (if you want reliability), or we might as well use TIPC, since it's been around for precisely this purpose since the late 90s. (Most of the interesting work on realtime and threading was done in the early 90s, and mostly we've had reinvention, or perhaps relearning, since.)
Though I also seem to recall X has a protocol for inter-app communication; another option is simply to fork X, drop permissions, and run another process solely to mux desktop events. (It's clearly quick enough at multiplexing, or our desktops would be kludges.)

More commonly you don't actually want to be on that bus, and you want to do what you're saying, ie talk directly to another process/service: ie IPC.
In the case of audio or video, you have dedicated channels (as simple as a pipe, as complex as you need) for the purpose, and putting it on the bus makes no sense whatsoever.

So the "desktop bus" case makes sense; it's within a user session, and it's for apps within that session, to be able to notify each other of what's happening, firstly, and secondly to make "object requests" without knowing who's providing the service. (Perhaps that's why X has the inter-app protocol.)
Overloading it with other things "because we can" is typical newbie design: when all you know is a hammer, everything looks like a nail.

I agree with your overall point; it's better to use a specific channel, and let the kernel multiplex, which is and always has been its job, a natural extension of multiplexing the CPU and scheduling processes only when they have something to do (select/poll/epoll are designed for that purpose, though the original case is a process waiting on a file access. See Lions for more in-depth background, and Bach for a good overview of the original design.) Certainly for any real work.

Note that the definition of an ORB is all about communication over a network, and is effectively a respin of the whole RPC approach. Take a look at this paper (linked from Wikipedia CORBA); imo it outlines why exactly RPC fell out of fashion: because you cannot simply paper over the differences between remote and local invocation, though from the programmer's pov both are exactly the same within RPC (and indeed, dbus.)
(Before anyone brings up AJAX, the programming model for an XMLHttpRequest makes it very clear that you're dealing with a remote.)

The "localised RPC" or "local ORB" case is the "middle ground" discussed at the end of that paper.

If we step back though, it seems to make a lot more sense, simply to provide a library for whatever thing it is you want to do, using a stable ABI independent of any implementation, exactly like LADSPA. (I love the fact that it's only had one change in version, 5 years ago or sth. That is how it's meant to be done.)
Nine times out of ten, it's going to be fine simply to make a direct function call, eg to the DE's implementation of whatever, and you don't need any of this malarkey at all. The other time, it's better to call on a service as you describe.

The other aspect would be to allow the admin to tweak what mechanism is going to be used when it is a service; though really this starts to feel a lot like a database abstraction layer in web-coding, where we provide a config option that tells the site how to access the database, rather than hardcoding it.
Not sure if it's needed, but that'd be the "easy-IPC" lib, so eg telling fubar to use a named socket, rather than a TCP port, is done in one place, and everything else knows that's how to contact fubar, without recompilation, because they ask for 'fubar' and the connection is handled by the libcode.
This ties into naming under TIPC as well, which I need to read more about. ATM I have some vague sense that it's about a port range.
Quote:
Manipulating structured data is not something that requires bus either. What you actually need here is a set of functions doing serialization and deserialization for you. Bindings.

"Bindings" only comes into play as a term, when you have a specific protocol and are tying it to a specific language; hence a "ruby binding for dbust". OFC the usual case is to do a C version, since C is supported everywhere, and let the higher-level languages use that (since it's machine-code, effectively.)

Wrt serialization and deserialization take a look at libpack for general usage; it's basically the stuff from "Practice of Programming" on steroids.
Though if we want to do RPC, capnrpc makes sense (and there are indeed bindings to different languages for it, but we'd need a C impl too.)
Quote:
That basicaly leaves us broadcasting events. Hmm... Doesn't inotify do that already?

Heh, I'm actually in favour of inotify for things like systray notification of how many emails we have (it was brought up on some linux forum and discussed in another thread here.) Just have a small daemon (in the classic sense of a helper process that waits around) that sleeps on notification from your ~/Mail and writes one summary file of it, that others can watch. (The only wrinkle would be the standard one: a reduction in the number of new mails is not an event; it just means the user has read them.)

OFC for most of us, that's our email client; I certainly don't see the point in it, since I've found it better to confine email to specific times when I'm ready to deal with it. Nice for a phone, though, and in any event, ours not to reason why, ours just to provide mechanism.

Nevertheless, while "what" is not our concern or decision, "how" most certainly is.

If we must provide pub/sub semantics, let's use the existing kernel infrastructure designed for the purpose, if we can't get away with just using X as designed.
If we must provide RPC, instead of a simple ABI, let's use the kernel developers' recommendation for RPC, capnrpc.

But most of all, let's not reinvent the wheel, and shove things down the bus just because we can; only when that's the most elegant approach.
And fgs, if we do have a "desktop bus" let's at least make it as nice to work with for the end-user as dcop was, or wtf are we doing?
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Jun 17, 2015 2:57 pm    Post subject: Reply with quote

skarnet wrote:
Native Unix IPCs only handle strings of bytes. (Or, in one case, file descriptors.)

This is incorrect; stream-based protocols like TCP operate as you say, but UDP (SOCK_DGRAM) operates on a message basis (where a datagram is a message.)
So do message-queues ofc; AF_UNIX can, like IP, operate in either stream or datagram mode, and it is ofc reliable unlike UDP, and we don't have to account for MTU/MSS as with IP.

Not saying that you don't also need marshalling etc (to know what the data means: a protocol over the transport); just needed to point out that messages are available (and less work.)

SCTP is message-oriented and reliable, but that's by the by since we're discussing local-machine. However you should take a look at UNP vol 1 if you're interested, and vol 2 for IPC. Have to say I like the look of SCTP a lot; one-to-many is pretty much how you're meant to use it, though you can also use it for one-to-one very similarly to TCP, and it has notifications, and heartbeat by default. You can also use an unordered message stream, and independent streams (eg: downloading 3 images) do not delay each other. (UNP vol1: chapters 9, 10 and 23.)
For our usage, TIPC is designed-to-purpose, and we'd be mad not to take advantage of all that work and experience, assuming we want pub/sub.
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Jun 17, 2015 3:30 pm    Post subject: Reply with quote

tclover wrote:
First, TPIC was designed to be almost a general purpose and efficient IPC. I am saying almost, because, the author lists some assumptions from where TIPC comes from that put some sort of limitations. (See TIPC specs for more info.)

Those are also documented above in my second post ("copying in some notes"), and as we can see those assumptions simply are not an issue for local-machine, nor indeed the clustered use-case. In fact the assumptions, based on the restricted scenario, are what enable TIPC to be so fast. They make a lot of sense, iow; there's no point pretending we need to deal with a dodgy WAN-link to the other side of the world, especially when that means we don't need to add an awful lot of code to do so.

So I'd say there's no "almost" about it; it is indeed "a general-purpose and efficient IPC" (which is not the same as RPC.)
Quote:
That's for the intro. So, the transport side of the IPC should be marked resolved--this is satisfy point #1: publication-subscription mechanism.

Now, the *general* of the supposedly "general purpose IPC" requires a user space library to satisfy the RPC and Structured-Data. TIPC does not impose any requirement of the data side, it does the transport/publication/subscription and maintenance of the buses and well to satisfy the latency requirement in network like way.

Indeed a client can use methods to make (function) calls to other clients, or use general D-Bus methods to get/set properties to other clients. And this method nature of D-Bus permits license evasion. So, some care should be taken in respect to how/why/what to implement RPC/structured-data in the user space library.

Agreed. You brought up the "functional interface has same legal effect as linkage" point in one of the other threads, and it must be addressed, both for legal certainty and simple morality, or doing the right thing.
Quote:
I don't know any other IPC that implement a similar RPC/structured-data mechanisms... I should take a look in capnrpc when I have some free times.

Have a read of that paper; it seems to me that once you replace function calls with "localised" RPC, you have the same set of problems wrt failure that you do in the wider-network case.
Though perhaps the "promise pipelining" (which brings to mind X object ids, for some reason) of capnrpc can mitigate that, so one call is a transaction that succeeds or fails in its entirety, despite being composed (by the caller) of several underlying operations.
skarnet
n00b


Joined: 01 Jul 2014
Posts: 25
Location: Paris, France

PostPosted: Wed Jun 17, 2015 5:31 pm    Post subject: Reply with quote

steveL wrote:
stream-based protocols like TCP operate as you say, but UDP (SOCK_DGRAM) operates on a message basis (where a datagram is a message.)
So do message-queues ofc; AF_UNIX can, like IP, operate in either stream or datagram mode, and it is ofc reliable unlike UDP, and we don't have to account for MTU/MSS as with IP.

Yes (and AF_UNIX datagram transport is actually reliable) but my point was that you can only send unstructured data; and once you have your marshalling functions, it becomes easy to encode message boundaries into a stream. I've done it, several times; and the amount of work necessary to send messages over a stream-based transport is peanuts compared to the amount of work needed to marshall your structured data in the first place. So it doesn't really matter whether you use streams or datagrams.
My personal preference goes to streams, because then you can use them as you would use pipes or regular files, with tools that just read or write on a file descriptor; you can even use them in scripts. You may have noticed I'm quite a fan of the "pluggable command-line utilities" model - to me, that's one of the biggest assets of Unix - and generic file descriptors accessible via read()/write() help a lot.

steveL wrote:
SCTP is message-oriented and reliable, but that's by the by since we're discussing local-machine.

Yes, I know SCTP, and I like it. But as you say, the network is a whole other enchilada: nowadays it's irresponsible to do network without thinking cryptography, and all of a sudden the complexity of your project rises up by two orders of magnitude. I'll address the network when I'm done with the local machine, and there's still a lot of work to do here. :)

steveL wrote:
For our usage, TIPC is designed-to-purpose, and we'd be mad not to take advantage of all that work and experience, assuming we want pub/sub.

I'll have a look at the TIPC spec when I get the time. However, I'd like to have something portable - BSD is not dead, and some people claim to have seen Solaris' gravestone move. So until AF_TIPC gets some wider acceptance, it will only be "yet another non-portable mutant feature of Linux", and can't replace a userland implementation, unless you settle for Linux-only software - which is one of the things people fault systemd for.
_________________
Laurent
tclover
Guru


Joined: 10 Apr 2011
Posts: 516

PostPosted: Wed Jun 17, 2015 7:48 pm    Post subject: Reply with quote

skarnet wrote:
steveL wrote:
For our usage, TIPC is designed-to-purpose, and we'd be mad not to take advantage of all that work and experience, assuming we want pub/sub.

I'll have a look at the TIPC spec when I get the time. However, I'd like to have something portable - BSD is not dead, and some people claim to have seen Solaris' gravestone move. So until AF_TIPC gets some wider acceptance, it will only be "yet another non-portable mutant feature of Linux", and can't replace a userland implementation, unless you settle for Linux-only software - which is one of the things people fault systemd for.


This should not be an issue, at least legally. The code is solid and actually used in cluster environments, and everything is dual-licensed GPLv2 / 2-clause BSD. Still, I have no idea whether it sees any use on the BSD side.
_________________
home/:mkinitramfs-ll/:supervision/:e-gtk-theme/:overlay/
steveL
Watchman

Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Sat Jun 20, 2015 3:16 pm    Post subject:

steveL wrote:
stream-based protocols like TCP operate as you say, but UDP (SOCK_DGRAM) operates on a message basis (a datagram is a message.)
So do message-queues ofc; AF_UNIX can, like IP, operate in either stream or datagram mode, and it is ofc reliable, unlike UDP; nor do we have to account for MTU/MSS as with IP.

skarnet wrote:
Yes (and AF_UNIX datagram transport is actually reliable)

As pointed out in the bit you quoted:
Quote:
but my point was that you can only send unstructured data;

That's true of any transport: the whole point is to keep transport and protocol separate, as much as feasible.
Quote:
and once you have your marshalling functions, it becomes easy to encode message boundaries into a stream. I've done it, several times; and the amount of work necessary to send messages over a stream-based transport is peanuts compared to the amount of work needed to marshall your structured data in the first place. So it doesn't really matter whether you use streams or datagrams.

At a conceptual level, ofc not, since you use an API to handle it for you, which I think is what's under discussion here.

However, what you're saying gives a lot of weight to simply using a human-readable, text-based format; it's not going to add any algorithmic complexity: O(N) will still be O(N), and O(log N) similarly.
In fact those conversion routines are among the most heavily-optimised of all, precisely because they're so central to the core stdlib.

That makes everything much easier to debug, and any language, including shell, can use its native handling to deal with it.
Quote:
My personal preference goes to streams, because then you can use them as you would use pipes or regular files, with tools that just read or write on a file descriptor; you can even use them in scripts. You may have noticed I'm quite a fan of the "pluggable command-line utilities" model - to me, that's one of the biggest assets of Unix - and generic file descriptors accessible via read()/write() help a lot.

Indeed; but one size doesn't fit all, which is precisely why POSIX gives us several components we can use for IPC.

There's no need to restrict ourselves to only one method, as tempting as that may be. As well decide that since TCP does it all, there's no point in supporting UDP.
steveL wrote:
For our usage, TIPC is designed-to-purpose, and we'd be mad not to take advantage of all that work and experience, assuming we want pub/sub.

Quote:
I'll have a look at the TIPC spec when I get the time. However, I'd like to have something portable - BSD is not dead, and some people pretend to have seen Solaris' gravestone move. So until AF_TIPC gets some wider acceptance, it will only be "yet another non-portable mutant feature of Linux", and can't replace a userland implementation, unless you settle for Linux-only software - which is one of the things people fault systemd for.

*sigh* if you'd at least glanced at the homepage, you'd have read:
Quote:
The TIPC project currently provides an open source implementation of TIPC for Linux, Solaris, and VxWorks.

Forgive me for venting, but that's quite annoying after you've asked me to look at a URL we've already discussed in the other thread, and I've said: "Do please read up on TIPC, or further discussion is going to be difficult."
Nothing major, but I thought I'd express that now, rather than let it come out some other way.

On the wider "userland is portable" point, by all means implement the same ABI on BSD. The point, however, is to have the same high-level ABI (and I mean an ABI guarantee, not just an API) everywhere, but use whatever platform-specific code we need to interface with the system. So the BSDs are going to use kqueue where Linux uses eventfd; that's fine and to be expected. It's not a portability issue, since the portability is about the higher-level ABI, along the lines of LADSPA.

Would be good to see what you'd like to spec there.
geki
Advocate

Joined: 13 May 2004
Posts: 2324
Location: Germania

PostPosted: Sat Aug 01, 2015 12:04 pm    Post subject:

if it were 10 years ago, I would have liked to join. sad to see it pass by. :?

[OT] (directed at red hat works - and its causes)
all these over-engineered works wrapping and entangling everything and nothing nowadays.
must be great to waste manpower - one's own, others', and third parties'. o well, ...
[/OT]

anyway, I hope to see an implementation one day,
and wish anyone trying to implement this the best,
and strong will and resistance to the beating that will happen. ;)
(who knows who will come around and join the discussion about the implementation then - knowing the past.)

edit: hopefully clarified a bit
_________________
boost|select libs to build|slotable|python-buildid

hear hear


Last edited by geki on Sat Aug 01, 2015 3:23 pm; edited 1 time in total