You seem to want pipes to bend over backwards to solve each and every one of your applications' problems.
> They don't solve race conditions in peers trying to locate each other (surprisingly difficult).
Not even sure what you mean here. Are you talking about peer discovery? Because DBus won't help you there either--peers have to be aware of each other's DBus object paths before they can rendezvous. Similarly, two peers need to know where the common pipe is to rendezvous.
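To make the parallel concrete, here's a minimal sketch of the rendezvous both sides must agree on in advance -- a well-known FIFO path playing the same role a DBus object path would (the directory and filename here are made up for the example):

```python
import os
import tempfile

# Both peers must know this well-known name in advance -- agreeing on the
# path *is* the entire discovery mechanism, just like agreeing on a DBus
# object path. (Path is invented for the sketch.)
path = os.path.join(tempfile.mkdtemp(), "rendezvous.fifo")
os.mkfifo(path)

# "Peer A" opens the read end, "peer B" the write end.
reader = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
writer = os.open(path, os.O_WRONLY)
os.write(writer, b"hi")
msg = os.read(reader, 2)
```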
> They don't solve a standardized marshaling format.
Nor should they. There are a ton of ways to skin this cat in userspace, depending on what your application needs. Protobufs come to mind, for example, but there are others.
Why do you want the pipe to enforce a particular marshaling format? Does the pipe know what's best for every single application that will ever use it?
> They don't come with an implementation to integrate with main loops for event polling.
It's not the kernel's responsibility to implement the application's main loop. That's what libevent and friends are for today, if you need them.
> They have an inherent vulnerability in FD passing where you can cause the peer to lock up.
Last I checked, you pass file descriptors via UNIX sockets, not pipes.
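For reference, a minimal sketch of what that looks like over an AF_UNIX socket, using Python 3.9's `socket.send_fds`/`recv_fds` wrappers around `sendmsg()`/`recvmsg()` with SCM_RIGHTS:

```python
import os
import socket

# SCM_RIGHTS descriptor passing only works over AF_UNIX sockets; a pipe
# carries bytes, never descriptors.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

r, w = os.pipe()                       # the descriptor we hand over
socket.send_fds(parent, [b"x"], [r])   # at least one data byte must ride along

msg, fds, _, _ = socket.recv_fds(child, 16, 1)
os.write(w, b"hello")
data = os.read(fds[0], 5)              # received fd refers to the same pipe
```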
> They don't handle authentication (well, sort of).
Depends on your application's threat model. The kernel provides some basic primitives that can be used to address common security-related problems (capabilities, permission bits, users, groups, and ACLs). If they're not enough, you're free to perform whatever authentication you need in userspace to secure your application against your threat model's adversaries.
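As one concrete example of those primitives: on Linux, a peer on an AF_UNIX socket can be identified with SO_PEERCRED. A sketch (Linux-specific; `struct ucred` is three native ints):

```python
import os
import socket
import struct

# SO_PEERCRED returns the connected peer's struct ucred (pid, uid, gid),
# filled in by the kernel and not forgeable by the peer.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
ucred = a.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                     struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", ucred)
# The application can now apply whatever policy its threat model calls
# for, e.g. comparing uid against an allow-list.
```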
It is unreasonable to expect the pipe to be aware of every single threat model an application expects, especially since they change over time.
> You can get into deadlock situations in your messaging code if you aren't really careful about message sizes and when you order poll in/poll out.
It's not the pipe's fault if you don't use it correctly.
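In fairness, the failure mode the quote describes is real: a blocking write larger than the pipe buffer (64 KiB on modern Linux -- that figure is an assumption here) never completes unless someone drains the read end, so two peers that both write first can deadlock. Using the pipe correctly means non-blocking descriptors and draining while you write, roughly:

```python
import os
import select

r, w = os.pipe()
os.set_blocking(w, False)  # never block in write(); let select() pace us

payload = b"x" * 200_000   # deliberately larger than the pipe buffer
sent, received = 0, bytearray()
while sent < len(payload) or len(received) < len(payload):
    want_write = [w] if sent < len(payload) else []
    readable, writable, _ = select.select([r], want_write, [])
    if writable:
        sent += os.write(w, payload[sent:])   # partial writes are fine
    if readable:
        received += os.read(r, 65536)         # drain so the writer can proceed
```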
> They aren't introspect-able to see what the peer supports.
Peer A could use the pipe to ask peer B what it can do for peer A. Why do you want the pipe to do peer B's job?
> They make it super easy to not maintain ABI.
Nor does DBus. Nothing stops an application from willy-nilly changing the data it serves back.
> Not even sure what you mean here. Are you talking about peer discovery? Because DBus won't help you there either--peers have to be aware of each other's DBus object paths before they can rendezvous. Similarly, two peers need to know where the common pipe is to rendezvous.
I suspect he was referring to socket activation and how that simplifies these kinds of messes.
> Nor should they. There are a ton of ways to skin this cat in userspace, depending on what your application needs. Protobufs come to mind, for example, but there are others.
Right... so that's exactly what systemd did. It used DBus, which provides that standard serialization format. Not my favourite format, but very well established, well tested, and focused on systemd's problem domain.
The point is, in order to have loose coupling between components, something like unix pipes is just a starting point.
> Does the pipe know what's best for every single application that will ever use it?
Ah, now I understand the problem with systemd. I never realized it was trying to take over every application's communications protocol! ;-)
Seriously, I think it is perfectly reasonable (and necessary) to define a standard protocol for system event notifications... I say this because it has already been done... by the standard components like udev & dbus that systemd is building on top of...
> Last I checked, you pass file descriptors via UNIX sockets, not pipes.
Correct. People have a tendency to mess up their semantics though. If the original poster wasn't referring to unix domain sockets, then it is an even sillier question.
> It is unreasonable to expect the pipe to be aware of every single threat model an application expects, especially since they change over time.
Yes, but you do need something more sophisticated than a pipe to manage secure communications between your systems components.
> Nor does DBus. Nothing stops an application from willy-nilly changing the data it serves back.
? D-Bus will drop you like a hot potato the moment you fire off invalid messages. You could send valid messages with fraudulent/misleading data payloads I guess, but at least a whole host of problems are addressed by tightening that up.
> The point is, in order to have loose coupling between components, something like unix pipes is just a starting point.
It's also an ending point. If each application gets to define its own IPC primitives, then there are as many app-to-app communication protocols as there are app-to-app pairs. This does not make for a loosely-coupled ecosystem.
> Seriously, I think it is perfectly reasonable (and necessary) to define a standard protocol for system event notifications... I say this because it has already been done... by the standard components like udev & dbus that systemd is building on top of...
This is circular reasoning. You're saying "we should use systemd's notification protocol, because systemd uses it." This says nothing about the technical merits of its protocol.
> Yes, but you do need something more sophisticated than a pipe to manage secure communications between your systems components.
Um, data within a pipe is visible to only the endpoint processes, the root user (via procfs and /dev/kmem), and the kernel. If you don't trust an endpoint, you should stop communicating with it. If you don't trust the root user or the kernel, you can't really do anything securely at all in the first place. My point is, data within a pipe is about as secure as it's going to get.
> You could send valid messages with fraudulent/misleading data payloads I guess, but at least a whole host of problems are addressed by tightening that up.
I think you will find that the bulk of IPC problems will come from dealing with data you didn't expect--that is, processes sending "fraudulent/misleading" data and other processes acting on it. DBus won't help you there--you always always ALWAYS have to validate data you receive from untrusted parties, no matter what transport or wire format you're using.
> It's also an ending point. If each application gets to define its own IPC primitives, then there are as many app-to-app communication protocols as there are app-to-app pairs. This does not make for a loosely-coupled ecosystem.
That's exactly why you need a more systemic approach to the IPC mechanism...
You can pretend that "oh this is just a stream so there isn't tight coupling", but the information that is communicated is the same. If you haven't imposed some structure and consistency to it, that's exactly how you end up with a ball of mud.
> This is circular reasoning. You're saying "we should use systemd's notification protocol, because systemd uses it." This says nothing about the technical merits of its protocol.
You misunderstood my point. I'm not justifying it on the basis that systemd is using it. I'm saying the fact that all the other systems have arrived at a similar, and in many cases the exact same mechanism, is pretty strong evidence that it is a reasonable design choice.
Basically all the Linux systems out there are already using udev & dbus. Most of the non-Linux systems do as well. Everyone's done it and made it work. That systemd is adopting arguably the most entrenched one in the Linux sphere is hardly as controversial as people seem to think it is.
> My point is, data within a pipe is about as secure as it's going to get.
I wasn't trying to suggest it wasn't a secure point-to-point communication mechanism (it has issues, but fine enough). The issue is that you need more integration with the security model to avoid having a rat's nest of security logic on top of it.
> I think you will find that the bulk of IPC problems will come from dealing with data you didn't expect--that is, processes sending "fraudulent/misleading" data and other processes acting on it.
There's a very long and glorious history of malformed and misleading IPC causing problems. Not that it is the only thing, but life becomes a lot easier when that problem is off the table.
> DBus won't help you there--you always always ALWAYS have to validate data you receive from untrusted parties, no matter what transport or wire format you're using.
Yes you will. However, DBus ensures that you don't have to write a ton of redundant code just verifying you are getting validly structured data and dealing with the nasty ways someone might try to exploit that.
Imagine having to write a secure REST service where the only thing you had to worry about in the entire network protocol stack was the validity of the data expressed in the payload.
> That's exactly why you need a more systemic approach to the IPC mechanism... You can pretend that "oh this is just a stream so there isn't tight coupling", but the information that is communicated is the same. If you haven't imposed some structure and consistency to it, that's exactly how you end up with a ball of mud.
So, you basically want to turn IPC into CORBA. It's a bad idea to have the OS impose too much structure on your IPC, in the same way that it's a bad idea to have the base class in an object hierarchy try to take on too many subclass-specific responsibilities. This is because over-specialization of a component needlessly constrains the designs of systems that use it.
That said, you are correct in that byte streams alone do not make for loosely coupled systems. Programs must additionally emit data such that other unrelated programs can operate on it without modification. But we already have this universally-parsable data format: it's called human-readable text. It's why you can "grep" and "awk" and "sed" the outputs of "ls" and "find" and "cat", for example.
Take a second and imagine what the world would be like if you had to write "grep" such that it had to be specifically designed to interact with "find," instead of simply expecting a stream of human-readable text. Imagine if "awk" had to be specifically designed to interact with "ls." This is the world that CORBA-like IPC creates, where programs not only need to be intrinsically aware of the higher-level RPC methods each other program exposes, but also intrinsically aware of the access and consistency semantics that go along with it. No thank you; I'll stick with pipes and human-readable text, where the data format, data access, and consistency semantics are universally applicable.
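That composition really is the whole contract. A sketch driving it from Python, using `printf` as a stand-in producer -- `grep` knows nothing about it beyond newline-delimited text:

```python
import subprocess

# Two unrelated programs interoperate because both speak newline-delimited
# text, not because either implements the other's interface.
producer = subprocess.Popen(
    ["printf", r"alpha\nbeta\ngamma\n"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    ["grep", "beta"], stdin=producer.stdout, stdout=subprocess.PIPE)
producer.stdout.close()   # let grep see EOF when printf exits
out, _ = consumer.communicate()
```

Swap `printf` for `find` or `ls` and nothing about `grep` changes -- which is the loose-coupling point being made above.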
> Basically all the Linux systems out there are already using udev & dbus. Most of the non-Linux systems do as well. Everyone's done it and made it work. That systemd is adopting arguably the most entrenched one in the Linux sphere is hardly as controversial as people seem to think it is.
First, udev is Linux-specific--it uses netlink sockets to listen for Linux-specific hardware events. Second, not everyone uses udev and dbus. mdev, smdev, eudev, and static /dev are also widely used and have well-defined use-cases that udev does not serve, and plenty of servers (and even my laptop) get along just fine without dbus.
Trying to justify udev and dbus because "everyone uses them so you should too!" is not only an example of the bandwagon logical fallacy, but also reveals your ignorance of and insensitivity to other users' requirements.
> The issue is that you need more integration with the security model to avoid having a rat's nest of security logic on top of it.
I said it above and I'll say it again here. The IPC layer does not know and cannot anticipate the security needs of every single application. If you try to design your IPC system to do this, you will fail to encompass every possible case. This is because threat models are not only specific to the application, but also specific to the context in which the application runs.
For example, you do not send your bank account password over an out-bound socket unless it has first been encrypted using a secret key known only to you and your bank. Your reasoning implies that the IPC system should be tasked with automatically enforcing this constraint, among others. Nevermind the fact that the IPC system will see only the ciphertext and thus will not know that the data it's about to send contains your password.
> However, DBus ensures that you don't have to write a ton of redundant code just verifying you are getting validly structured data and dealing with the nasty ways someone might try to exploit that.
So do plenty of stub RPC compilers and serialization libraries that have been around longer, are more widely used, and are better tested than DBus. However, neither DBus nor any of these solutions will help you with well-formatted but invalid data. Your application has to deal with that, since the validity of data is both application-specific and context-specific (so, not something the IPC system can anticipate).
Again, what's so special about DBus, besides the fact that it's the New Shiny?
> Imagine having to write a secure REST service where the only thing you had to worry about in the entire network protocol stack was the validity of the data expressed in the payload.
If I'm writing a secure service of any kind, you can be damn certain I'm thinking about a LOT more than input validation! Security encompasses waaaaaaaaaay more than that.
Even if security wasn't an issue, there is still a LOT more to worry about than input validation. Things like scalability, fault tolerance, concurrency, and consistency come to mind. There is no silver bullet for any of these, let alone an IPC system that solves them all at once!
> First, udev is Linux-specific--it uses netlink sockets to listen for Linux-specific hardware events.
I was thinking about this comment, and I realized this is probably the source of most of your angst, which leaves some great solutions on the table.
systemd isn't really creating a much more significant break with the systems you like, because it's building on top of Linux, which for the most part has already made the break.
The problem is, projects like GNOME, which have software you want to use, are integrating more tightly with Linux and specifically bits of systemd.
I think the obvious solution is a bridge/better interface. The contracts that GNOME is going to rely upon are at least going to be pretty well defined, and if you've got another system that works better, it shouldn't be hard for it to provide an equivalent, even compatible, interface.
If it really is demonstrably better, GNOME and other projects will likely adopt your interface/abstraction, and systemd will end up having to communicate through your interface. Even if they don't, it is a comparatively simpler effort for a software community to support a relatively small set of touch points that they want GNOME to be aware of, and maintaining a fork or compatibility layer is a perfectly reasonable solution (indeed, BSD already does this for Linux runtimes).
I can understand why it'd not be a perfect solution from your perspective, but if a bunch of developers contributing to a work you care about are going a direction you don't like, it's about as good an outcome as one could hope for.
> I was thinking about this comment, and I realized this is probably the source of most of your angst
No, what gives me the most angst is the arrogance of a certain segment of Linux+systemd users who think that being able to apt-get install systemd and write some minimal unit files for some trivial services somehow makes them domain experts on OS design. And these people seem to think that other users' requirements don't matter, since if they're not using systemd too, they're clearly doing it wrong.
That's a lot of angst over something that is clearly going to crash and burn in short order and has zero impact on the design and architecture of the systems you and I work with...
This is a pretty good example of how a lot of the contempt for systemd seems to stem from one or both of ignorance of how systemd works and/or ignorance about what a good solution might look like: https://plus.google.com/+LennartPoetteringTheOneAndOnly/post...
No, very much not, because we don't really need an RPC mechanism here. We want something in the event/messaging space.
But it isn't even a want. If you don't have it, you end up with each of your components actually being very tightly coupled to all the other components it talks to, and you've got a truly monolithic mess on your hands.
> It's a bad idea to have the OS impose too much structure on your IPC, in the same way that it's a bad idea to have the base class in an object hierarchy try to take on too many subclass-specific responsibilities. This is because over-specialization of a component needlessly constrains the designs of systems that use it.
This isn't exactly a new concept or a new problem. There are plenty of existing cases where this is happening (basically every platform I can think of right this moment, though I'm sure there are plenty of exceptions), including in the current Linux udev mechanism.
> But we already have this universally-parsable data format: it's called human-readable text. It's why you can "grep" and "awk" and "sed" the outputs of "ls" and "find" and "cat", for example.
/me falls out of chair.
Yeah, that's worked out great. Never had a problem with init scripts not extracting the right column or handling a new variant in how the output comes out (or even better still, the dreaded "value with embedded whitespace").
But you know what? DBus is basically human readable with a bit more imposed structure than generic streams. So, I think you are arguing in support of the systemd approach without realizing it! ;-)
> First, udev is Linux-specific--it uses netlink sockets to listen for Linux-specific hardware events. Second, not everyone uses udev and dbus. mdev, smdev, eudev, and static /dev are also widely used and have well-defined use-cases that udev does not serve, and plenty of servers (and even my laptop) get along just fine without dbus.
Very true. I was speaking in generalities. Point being, udev is out there and very thoroughly established as something that people seem to generally want.
> The IPC layer does not know and cannot anticipate the security needs of every single application. If you try to design your IPC system to do this, you will fail to encompass every possible case.
You have some ambitious notions for the systemd project that go well beyond the goals that critics say are overly broad in scope. This is for addressing a relatively narrow set of problems that wouldn't even come close to defining 1% of IPC on a Linux system. I'm not suggesting we replace the entire Unix toolset with a complete new set of interfaces and programs (nor are the systemd guys). This is specifically for managing the interactions between devices & daemons... It's a well established problem domain with some well established roles & responsibilities and some pretty well understood message/event structures.
While it might use systemd/dbus/whatever to get notifications about various services and system events, YOUR BANKING SOFTWARE IS NOT SUPPOSED TO USE SYSTEMD TO MOVE MONEY BETWEEN YOUR ACCOUNTS!
> Again, what's so special about DBus, besides the fact that it's the New Shiny?
DBus isn't the new shiny. It's the old shiny. The new shiny would probably be 0mq or some of the new datagram protocols that people are experimenting with, along with various extensible binary protocols like MessagePack and Cap'n Proto.
What's special about DBus is that it is already being used very broadly on Unix platforms for this kind of function and is well integrated into the system security model. The one bit of additional coolness it brings to the table is the support for socket activation, which tremendously simplifies start ordering and discovery--indeed a VERY nice benefit, but one that could no doubt have been NIH'd independently.
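For the curious, the activation handshake itself is tiny: the manager binds the socket, then advertises it to the daemon through two environment variables, with the passed descriptors starting at fd 3. A sketch following the protocol documented for sd_listen_fds(3):

```python
import os
import socket

SD_LISTEN_FDS_START = 3   # first passed descriptor, per the protocol

def listen_fds():
    """Return sockets handed to us by a socket-activation manager,
    or [] so the caller can fall back to binding its own."""
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []         # the fds were meant for some other process
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=fd)
            for fd in range(SD_LISTEN_FDS_START,
                            SD_LISTEN_FDS_START + count)]
```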
> If I'm writing a secure service of any kind, you can be damn certain I'm thinking about a LOT more than input validation! Security encompasses waaaaaaaaaay more than that.
Yes, but the point is one derives substantial benefit from trusted components in the system that take care of their part of the problem. You don't benefit from having to reimplement an entire security apparatus with each component. This is basic security compartmentalization 101.
> Things like scalability, fault tolerance, concurrency, and consistency come to mind. There is no silver bullet for any of these, let alone an IPC system that solves them all at once!
You appear to be simultaneously claiming there is no silver bullet and being terribly upset that systemd isn't one.
Yes, it is no silver bullet. It's not even the huge sea change that some people seem to think it is. Rather, it is an incremental improvement over existing practices that gets rid of a bit more of the cruft and stupidity in the existing infrastructure. Doing that kind of thing right can really make a big difference for the system as a whole, but it isn't the apocalypse.
Somehow you think I'm talking about systemd and init scripts and the things they do. I'm not. The original question I replied to was about why pipes (or any OS-level IPC) shouldn't try to solve application-level problems.
My arguments are that the OS's IPC should not enforce an IPC record structure, but should enforce a consistent set of IPC access methods (i.e. pipes, sockets, shared memory, message queues, etc.) defined independently of applications. I think we're in agreement about the latter--if the OS were to let each application have its own IPC access methods, then there would be as many access methods as there are applications (leading to a tightly-coupled "truly monolithic mess").
I don't think we've reached agreement on the former. I claimed that there is no "best record structure" for all applications, so the OS shouldn't try to enforce one. I also mentioned that human-readable text is the universal data format, which is both a manifestation of this principle (i.e. the OS imposes no constraints on the structure of bytes passed between programs) and a desirable outcome since parsing text is super-simple to implement (by contrast, take a look at the examples in dbus-send(1) to see how painful the alternative can be). You disagree--you think the IPC system should also handle things like serialization and validation.
The problem is that serialization and validation are both application-specific (and even context-specific) concerns, and for the IPC system to address them, it has to gain knowledge from the application. But this lets the application set IPC access methods, which we've already agreed is a bad idea! My (extreme) example to prove this point was that pushing validation responsibility from the application into the IPC system would require it to handle ridiculous application-specific corner cases, like defining a socket class that makes sure that your bank account password won't be sent to the wrong host (still not sure how you concluded that that remark was about systemd). The point is, if you want your IPC system to handle validation for you, you're just asking for trouble.
The same type of problem occurs when you put serialization into the IPC system. The serializer has to know whether or not a string of bytes represents a valid application-defined record. If you make serialization the IPC system's responsibility, it needs application-level knowledge on whether or not an inbound message represents a valid message (which also leads to ridiculous corner cases).
DBus not only enforces structured records (bad), but also lets applications define their own IPC access methods (worse). The RPC-like nature of DBus means that both peers must not only agree on the interpretation of bytes in advance, but also agree on the semantics of accessing them. Unlike reading from a pipe, accessing the value of a DBus object by name can have arbitrary side-effects which the requester must be aware of. In the limit, this puts us into the undesirable situation of having each application-to-application pair agree on an IPC access method, leading to the tight coupling nightmare.
Don't get me wrong--DBus has its use-cases. OS-level IPC isn't one of them. I wish systemd folks took some time to think about this, but they're too busy trying to make DBus into OS-level IPC with no regards to the consequences. See kdbus and the SOCK_BUS socket class it exports.
Now, nitpicks:
> But you know what? DBus is basically human readable with a bit more imposed structure than generic streams
/me falls out of chair too.
Now you're just being daft :) The more structure you impose on bytes, the less human-readable it gets. For example, I don't think I have to explain to you why this comment is more legible as rendered in your browser (unstructured text) than as raw HTML (structured records).
> The original question I replied to was about why pipes (or any OS-level IPC) shouldn't try to solve application-level problems.
That may be what you read, but the context of drdaemon's statement was specifically in response to a question about communications with the init daemon, and of course everything I said after was as well... Glad we got that settled.
> Not the IPC system's responsibility.
Hmm... IPC systems need to have ways of matching up the parties in a conversation, and having one where you don't have to enforce who calls whom first, and where parties don't have to mutually agree upon the specific endpoints in advance, sure seems like something an IPC system might want to have... particularly one employed in an init system...
There absolutely is a ton of overlap between what systemd does with socket activation and what xinetd has evolved to... but as with everyone else doing OS design, there comes a point where you leave xinetd behind and let the full potential of that trick work in your favour.
Don't get me wrong, I think a lot of the Wikipedians are pretty daft, but they are as reasonable a judge of human readability as I can imagine, given what they do.
> CORBA is the old shiny ;)
CORBA is the old shiny-my-god-we-dont-need-nearly-all-of-that-and-it-really-benefits-a-bootstrapped-system-so-there-is-a-chicken-and-egg-problem-here. But yeah, close. I don't think anyone has seriously considered that since the OS/2 & Workplace OS days... and even then.
That said, I would say that THESE DAYS (unlike in its heyday), CORBA is a pretty awesome, robust, feature-rich _general purpose_ distributed IPC system.
> Of course--you use a library and an RPC stub generator for this.
Ah, so it is much more modularized if it runs as an executable piece of code in process than a piece of executable code out of process. Got it. ;-)
> Not really part of the "design principles of IPC" discussion we've got going, though.
Well, that's the discussion you're having. I'm trying to talk about the design constraints and appropriate solutions for the problem domain...
Lennart Poettering claims that you should use his software instead of someone else's software! I'm SHOCKED! Full story at 11.
Seriously now, did you honestly think that he would say to use xinetd over systemd? Do you honestly believe a developer will advocate the use of a competing piece of software over something (s)he produced?
> There absolutely is a ton of overlap between what systemd does with socket activation and what xinetd has evolved to... but as with everyone else doing OS design, there comes a point where you leave xinetd behind and let the full potential of that trick work in your favour.
Unless you don't feel like replacing small, simple, easy-to-use, well-tested xinetd with the 200K-line pile of C code that is systemd.
Besides, I've got your socket activation right here: Start the daemon, have the daemon open a port, and let the kernel swap it to disk. The kernel will swap it back in when it receives a connection for it.
Benefits:
* the daemon preserves state between "activations" for free
* the kernel gives you this feature for free
Security:
* the daemon doesn't have to trust another userspace program with anything
* the daemon can use mlock() to prevent sensitive pages from getting swapped
* if this isn't enough, you can encrypt the swap partition to resist offline attacks
Resources:
* If disk is too expensive, disk is read-only, you have no swap, you have no CAP_IPC_LOCK, the daemon would need to mlock() too much RAM, and you can't encrypt your swap, there's xinetd.
* Need to apply filters or QoS controls on connections before waking up the daemon? That's what the firewall is for.
Trivia:
* You can have xinetd trigger whatever event you want, since all it does is fire up a program and run it. This includes alerting other programs, like a service manager, that it got a connection, and maybe even sending along the message (or the file descriptor) if you want. There is no need for systemd to subsume this responsibility.
As you can see, "socket activation" is by and large a marketing gimmick.
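And the xinetd trick described above is indeed small enough to sketch in a few lines: the supervisor owns the listening socket and hands each accepted connection to a freshly spawned handler as its stdin/stdout. (`serve_once` is a hypothetical helper for illustration, not xinetd's actual code.)

```python
import socket
import subprocess

def serve_once(lsock, argv):
    """Accept one connection on an already-listening socket and run the
    handler program with the connection as fds 0 and 1, inetd-style."""
    conn, _ = lsock.accept()
    subprocess.run(argv, stdin=conn.fileno(), stdout=conn.fileno())
    conn.close()

lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsock.bind(("127.0.0.1", 0))   # ephemeral port, fine for a sketch
lsock.listen(1)
# serve_once(lsock, ["cat"]) would now echo one client's input back,
# and argv could just as well be a notifier for a service manager.
```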
> Me and the folks at Wikipedia:...
You think an article that compares data serialization protocols somehow proves your ludicrous claim that human readable text is less readable than marked-up text? Maybe daft was too nice a word...
> Ah, so it is much more modularized if it runs as an executable piece of code in process than a piece of executable code out of process. Got it. ;-)
Sir/madam, have you ever written an Internet-facing daemon? Obviously the bulk of the RPC logic lives in a shared library. You know, a logically distinct module that can be independently installed, loaded once, and independently maintained.
Besides, procedurally-generated RPC-handling code adds no more technical debt to your project than the compiler's generated assembler output does.
You seem to want to replace the RPC shared library with a separate process. Not only will this create a performance bottleneck, but it also makes it a single point of failure. If it crashes, all your daemons lose their connections. This is obviously highly undesirable, especially on servers.
> Well, that's the discussion you're having. I'm trying to talk about the design constraints and appropriate solutions for the problem domain...
I think I'm done with you. You deserve everything systemd will ever do for you.
> Seriously now, did you honestly think that he would say to use xinetd over systemd?
No... but I thought he might be able to pretty adequately explain how systemd exploits socket activation and contrast it with xinetd....
> Do you honestly believe a developer will advocate the use of a competing piece of software over something (s)he produced?
Well, I've certainly done it, so it is possible, but I wasn't referencing him as a persuasive voice... Even if I was, that'd be such a flawed and pathetic argument...
> Unless you don't feel like replacing small, simple, easy-to-use, well-tested xinetd with the 200K-line pile of C code that is systemd.
You might want to look at the code. The socket activation logic is a pretty clean & tight ~90K chunk of code in a handful of files... and for the record, xinetd isn't that slim, with nearly 25K lines of code spread over well over a hundred files, and that's if you only count the C source files.
> As you can see, "socket activation" is by and large a marketing gimmick.
Sigh... I can see you didn't read the article. The implementations aren't terribly different, and Lennart already made your points for you... Systemd does have some little tweaks that open up a bunch of different worlds of advantages.
> Sir/madam, have you ever written an Internet-facing daemon?
Yes, but of course, in this context we're primarily focused on AF_UNIX sockets...
> Obviously the bulk of the RPC logic lives in a shared library. You know, a logically distinct module that can be independently installed, loaded once, and independently maintained.
It's very common, for example, for web apps to have a separate process that parses and validates inbound RESTful HTTP requests before passing them on to the main application process. You can and do run web apps that are directly exposed to the Internet, but nobody suggests that this is done to make the request-processing logic more modular...
> You seem to want to replace the RPC shared library with a separate process. Not only will this create a performance bottleneck, it also makes it a single point of failure. If it crashes, all your daemons lose their connections. This is obviously highly undesirable, especially on servers.
I see you are familiar with Erlang. ;-)
You raise a good point. Often, to reduce failure rates, people employ load balancers that work with various HA protocols to avoid losing connections. What do load balancers do again? Oh yeah, they are separate processes that receive inbound RPC requests, parse and validate them, and attempt to mitigate any inbound attacks before routing and forwarding them to the application itself...
And of course, a lot of web applications are largely front ends to a database, which means they themselves are processing RPC requests, formatting, validating and transforming them before forwarding them to a database for execution...
..and let's not get started about middleware... ;-)
> You seem to want to replace the RPC shared library with a separate process.
No. I really don't. I'm just pointing out that if you are looking for small, modular, loosely coupled components that are fairly resilient, nobody is going to critique moving a component from a shared library to a separate process on the grounds that it intrinsically makes for more tightly coupled code.
Or wait, are you suggesting that systems where all these libraries are rolled up into one process would be more modular? [looks at critique of how systemd puts too much stuff into one process...]
> They aren't introspect-able to see what the peer supports.
Peer A can simply use the pipe to ask peer B what it supports. Why do you want the pipe to do peer B's job?
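To make that concrete, here's a toy sketch of peers doing their own capability negotiation over a plain socketpair (standing in for any pipe or socket between them). The one-line JSON "hello" format and the feature names are entirely made up for illustration:

```python
import json
import socket

def send_hello(conn, features):
    # Advertise our capabilities as one newline-terminated JSON object.
    msg = json.dumps({"hello": 1, "features": sorted(features)}) + "\n"
    conn.sendall(msg.encode())

def recv_hello(conn):
    # Read until the newline delimiter, then decode the peer's advertisement.
    buf = b""
    while not buf.endswith(b"\n"):
        buf += conn.recv(4096)
    return json.loads(buf)

# Two connected peers; peer A announces itself, peer B intersects
# the advertised features with the ones it understands.
a, b = socket.socketpair()
send_hello(a, {"list", "fetch", "compress"})
peer = recv_hello(b)
common = set(peer["features"]) & {"list", "fetch", "delete"}
```

A few lines of application code, no bus required; the transport stays dumb and each peer remains the authority on what it supports.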
> They make it super easy to not maintain ABI.
Neither does DBus maintain it for you. Nothing stops an application from willy-nilly changing the data it serves back.