Nope, my problem isn't reasoning about the lower-level components. In fact, you can assume I have correctness proofs for every piece of the infrastructure. What I don't have is the higher-level piece that can reason about the lower-level components and do so when new pieces are added, old pieces removed, new guarantees enforced, old ones relaxed, etc. A static type system, no matter how advanced, doesn't help with any of it.
Yep, that's called the "halting problem", and I hear it's very hard to solve :)
Here's how I generally see it. Writing large software is hard (and software composed of multiple components is large software, even if you decide using the network stack when calling some functions is a good idea ;)). "Type" people (and I'm certainly for some kinds of static types, I just don't think they're the solution to everything) would tell you that types restrict your computation in such a way that reasoning about it is easier. To that I say: but I want to do something specific that's pretty annoying in PFP. They answer: just do it a different way. To which I say: if I can do it differently, so can the compiler on my behalf. In short, I don't think that explicit types (i.e. those that can be defined in the language you use) are stronger than implicit types (i.e. anything that a verifier can prove about your program), and at least in theory they are both "the same" (though see another comment here on what I think about two things being the same in a mathematical sense). We should use whatever is easier for our minds to grasp.
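To make the explicit/implicit distinction concrete, here is a small hypothetical TypeScript illustration (the names `NonEmpty`, `head`, and `headVerified` are mine, just for this sketch): the same "list is non-empty" guarantee, once written into a type, and once left as a property that a verifier (or a runtime check standing in for one) would have to establish.

```typescript
// Explicit type: the invariant lives in the type definition itself.
type NonEmpty<T> = [T, ...T[]];

function head<T>(xs: NonEmpty<T>): T {
  // No runtime check needed: the compiler rejects head([]) at the call site.
  return xs[0];
}

// Implicit version: a plain array plus a property about the program that
// some checker must prove (here approximated by a runtime check).
function headVerified<T>(xs: T[]): T {
  if (xs.length === 0) throw new Error("empty list");
  return xs[0];
}
```

Both encode the same fact; one puts the proof obligation on the type checker at each call site, the other pushes it to whatever verifier (or test, or reviewer) looks at the whole program.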
The more essential problem -- although I don't know if it's more interesting or less interesting -- is what exactly do we want to do in our programs? If we require a Turing complete model, then no matter how many types -- explicit or implicit -- we throw at the problem, we just won't be able to formally prove properties we might care about. Some people, however, claim that we don't really need Turing completeness, and if we don't, that might open a whole new world of possibilities. But once we start restricting the computational model, the representation we use -- say, PFP or imperative -- might become much more important, each being able to compute things that the other cannot (we know that lambda calculus and Turing machines both compute the same functions, but once we restrict each in some arbitrary ways, it gets harder to find equivalences between restrictions in the PFP model and those in the imperative model).
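One concrete (and hypothetical, chosen by me for illustration) example of what giving up Turing completeness buys: if recursion is restricted to structural recursion over finite data, termination is guaranteed and a checker gets that property for free, whereas an unrestricted loop enjoys no such guarantee.

```typescript
// Structural recursion: a fold consumes one element per step, so it
// provably terminates on any finite input.
function sum(xs: number[]): number {
  return xs.reduce((acc, x) => acc + x, 0);
}

// General recursion: whether this halts for every n is exactly the kind
// of question the halting problem makes undecidable in general (this
// particular loop is the Collatz iteration).
function collatzSteps(n: number): number {
  let steps = 0;
  while (n !== 1) {
    n = n % 2 === 0 ? n / 2 : 3 * n + 1;
    steps++;
  }
  return steps;
}
```

The point isn't that the second function is wrong, but that only the first shape survives in a restricted model, and different restrictions of PFP and imperative styles carve out different sets of such shapes.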
Finally, I think that the notions of what it means "to reason about", or even what it means to be correct (and certainly how important it is to be correct), are themselves vague, and require answers more from empirical psychological/sociological/economic research than from math.
I think the people that say we don't need Turing completeness say that because their models can't really accommodate proper reasoning about exactly the kinds of systems I want to reason about. Proving theorems is nice and all but at the end of the day the temporal and dynamic aspects of the system I'm building are what I really care about. Not some static snapshot of it but the actual evolving gestalt.
Oh, absolutely, but there are formal methods for working with that, too -- well, maybe just the "evolving" part if not the "gestalt" -- mostly based on temporal logic (I can't believe I'm arguing on the side of formal methods now...). See, e.g., the Esterel language[1], used in industry (directly or through its descendant languages) to write full, safety-critical real-time systems. It is an imperative, non-Turing-complete language that is fully verifiable, and actually able to track an evolving gestalt for some restricted definition of "gestalt". Esterel, BTW, has had much more success than Haskell in the industry.
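For flavor, here is a hypothetical TypeScript rendering (not Esterel itself) of ABRO, the classic introductory Esterel program: emit O as soon as both A and B have occurred, in any order, and reset on R. The synchronous style means each `tick` reads the set of present signals and reacts; because the state is finite, every property of interest is checkable by exhaustive enumeration.

```typescript
// A finite synchronous machine: each tick reads the present input
// signals and may emit the output signal O.
type Input = { A: boolean; B: boolean; R: boolean };

class Abro {
  private seenA = false;
  private seenB = false;
  private done = false; // O already emitted since the last reset

  // One synchronous reaction; returns whether O is emitted this tick.
  tick(input: Input): boolean {
    if (input.R) {
      // Reset preempts everything, as in Esterel's abort.
      this.seenA = this.seenB = this.done = false;
      return false;
    }
    this.seenA ||= input.A;
    this.seenB ||= input.B;
    if (!this.done && this.seenA && this.seenB) {
      this.done = true;
      return true; // emit O exactly once per A-and-B since reset
    }
    return false;
  }
}
```

With three booleans of state and eight possible inputs, a property like "O is never emitted twice without an intervening R" is a finite check -- which is precisely what restricting the computational model buys.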
Another example is TLA+, which is used by Amazon to verify their (Java, I assume) web services[2]. Unlike Esterel, TLA+ is Turing complete, and can therefore fail to prove many things. Checking also takes longer and longer the bigger the model is, so in practice it is used to verify algorithms, not entire systems.
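The core idea behind that style of checking can be sketched as brute-force state-space search. Below is a hypothetical TypeScript sketch (not TLA+ or its TLC checker, just the principle): enumerate every reachable state of a toy two-process lock protocol and test the mutual-exclusion invariant in each one. The exponential growth of this state set with model size is exactly why whole systems are out of reach.

```typescript
// Explicit-state model checking in miniature: exhaustively explore the
// reachable states of a two-process lock and check an invariant.
type Pc = "idle" | "crit";
type State = { pc: [Pc, Pc]; lockHeldBy: -1 | 0 | 1 };

// All possible next states from s (one step by either process).
function next(s: State): State[] {
  const succs: State[] = [];
  for (const i of [0, 1] as const) {
    if (s.pc[i] === "idle" && s.lockHeldBy === -1) {
      // Acquire the lock and enter the critical section atomically.
      const pc: [Pc, Pc] = [...s.pc];
      pc[i] = "crit";
      succs.push({ pc, lockHeldBy: i });
    } else if (s.pc[i] === "crit") {
      // Leave the critical section and release the lock.
      const pc: [Pc, Pc] = [...s.pc];
      pc[i] = "idle";
      succs.push({ pc, lockHeldBy: -1 });
    }
  }
  return succs;
}

// Explore every reachable state; true iff mutual exclusion always holds.
function checkMutex(): boolean {
  const init: State = { pc: ["idle", "idle"], lockHeldBy: -1 };
  const seen = new Set<string>();
  const queue: State[] = [init];
  while (queue.length > 0) {
    const s = queue.pop()!;
    const key = JSON.stringify(s);
    if (seen.has(key)) continue;
    seen.add(key);
    if (s.pc[0] === "crit" && s.pc[1] === "crit") return false; // violation
    queue.push(...next(s));
  }
  return true;
}
```

This toy has three reachable states; a realistic service has astronomically many, which is why the technique is pointed at individual algorithms.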
Also, see the history of Statecharts[3], a predecessor to Esterel, that was designed first and foremost to assist humans in writing specifications and only later combined with Pnueli's temporal logic to facilitate formal methods.
BTW, the same David Harel who worked with Pnueli on Statecharts and temporal logic is now involved with Behavioral Programming[4], a very interesting programming style that makes human specifications easier to translate to code. For a usage example, see Harel et al., Programming Coordinated Behavior in Java[5].
Then, it seems to me, you are not complaining about programming languages, but some higher-level tools. In production, the “static guarantees and beautiful abstractions” you write about do not become useless. They are just not meant to directly solve the larger-grain emergent problems of distributed systems.
That's exactly my point. PLT stuff is a minor nuisance for all the stuff that I deal with day to day. It might be the echo chamber, but whether a language has support for monads, higher-order types, etc. is not even on the list of my top 100 concerns, and yet it seems like every other PLT researcher is constantly worried about some category-theoretical construct or another and trying to jam it into yet another language.