The sad thing is that "easy to link and embed" and "permissive license" really didn't have to go together like they did in LLVM.
GCC could have provided a library that made it easy to do code generation or compiler development. That library could still have used the GPL, so that only Free Software could use it. Some of the work that has happened around LLVM wouldn't have happened around such a library, but some of it would have. (GCC has much more recently started down the "plugin" route, but too little too late.)
For instance, Swift might not have happened. But Rust may well have; a rustc compiler under GPL would not have broken anyone's ability to use it, any more than GCC's GPL prevents people from building proprietary software with it.
But instead, the GNU project chose to make it difficult to build any extension to GCC outside of GCC's own codebase. So instead of having an easy-to-use library provided by GCC under GPL, which might have furthered many of the GNU project's goals and encouraged more Free Software compiler development, they forced the development of an entirely separate compiler project to provide such a library. And the people who came along to do so, since they had to rewrite it anyway, chose to do so under a permissive license.
So, at least in hindsight, the Free Software world is worse off than it would have been had GCC provided a library infrastructure.
I feel like you're missing the point of the GNU GPL. The original idea behind the FSF's idea of open source was an ecosystem where every new development resulted in more open and free code. We're talking about people who truly believed in getting away from commercial software entirely. There were people who felt that one day, Linux desktops could replace Windows, and even high-end tools like Photoshop and Final Cut would have great open source replacements.
...but that never happened. Apple has tried to remove all GPL code from Mac OS. Where GPL code exists, it's only GPLv2 (like Bash; using an ancient version).
Even though all these companies are embracing open source, they're embracing only permissive licenses. They're open sourcing the underlying tech stack while keeping all their offerings closed. If you're on Android, you are totally locked into Google services. I've met people who tried only using F-droid and running "open source phones" (minus the drivers, of course). Their phones sucked. One of them switched back because she was tired of writing apps to do things she used to be able to do with the Play Store.
The dream of a GPL world, where the software is free and you only pay for the hardware, is pretty much gone. We don't have federated systems, and where we do, no one uses them (GNU Social, Miro, Diaspora).
This isn't the open source world some of us envisioned back in the late 90s. Personally, I will keep as much of my code under the GNU GPL as I can.
> I feel like you're missing the point of the GNU GPL. The original idea behind the FSF's idea of open source was an ecosystem where every new development resulted in more open and free code.
First of all, "FSF's idea of open source"? They'd rather vigorously object to that phrasing. :)
But no, that was exactly my point: I'd argue that intentionally choosing to not allow the development of a library version of GCC, even under the GPL, caused a net reduction in the amount of Free Software in general, and copyleft-licensed Free Software in particular, that would otherwise have been developed.
RMS -- and perhaps the FSF more generally -- seems more concerned with preventing non-Free software, with creating Free Software as a means to that end, than with increasing the volume of Free Software.
To RMS, non-Free software isn't just worse than Free software; it's a net harm to society, such that it's better to have no software than to have non-Free software. It's even often (apparently, to RMS) better to have no additional software than to have a lot of additional Free Software and some additional non-Free software.
I wouldn't describe the FSF's position that way (they value Free Software for its own sake, and their goal is that all software should be Free Software), but even if that were their position, I'm arguing that their handling of GCC and refusal to support it as a library directly motivated the creation of LLVM and led to a scenario that better supports the creation of proprietary software.
Providing a GCC library would have significantly lessened the motivation for LLVM: many of the "we need a library" folks would have one, leaving just the "we need a permissively-licensed library" folks, and the "GCC's codebase is awful" folks. (And even some of the latter group might have worked to fix GCC.)
That's not the only thing done wrong with GCC development. The much worse problem is that GCC doesn't really support reviewing and accepting patches; you pretty much have to have commit access to do any significant work on GCC.
> I'm arguing that their handling of GCC and refusal to support it as a library directly motivated the creation of LLVM and led to a scenario that better supports the creation of proprietary software
I'm sure they are well aware of that. I don't see how they could have handled it differently without compromising their core values. Allowing GCC to be wrapped by proprietary bits would be unconscionable since it curtails user freedom (to modify & improve the software).
The differences in core-beliefs between Apple and FSF are irreconcilable: the former is pro-Tivoization and the latter is anti-Tivoization. Your stance on Tivoization ought to be a fair indicator on which side of the fence you stand, and no side will ever be able to convince the other side: at best you can agree to disagree.
Yeah. Software can be both libre and not free (as in: you have to pay for it). The only difference is that once you've paid for it, you OWN it, the same way you own, say, your shoes.
(I was going to go for the car example, but that is less and less true. We do not own our car anymore.)
It's unreasonable to expect commercial software companies to do all the work to build a free software ecosystem, since that's not what they're about. Still, they've made some substantial contributions. F-droid might not be good enough yet, but they're still a lot further along than they would be starting from scratch, without Android.
So although it hasn't succeeded for phones yet, building on top of tech stacks contributed by others (who haven't fully bought in to the same goals, but are sympathetic in some ways) still seems like a great strategy. The free software community needs to finish the job.
On the other hand, going fully independent and building stuff separately without those contributions (which seems to be what some free software folks want to do with commercial-unfriendly licenses) doesn't make a whole lot of sense to me. If you can't succeed even with substantial help, how do you succeed without help?
The web itself is a federated system, running largely on Free software, built to open specifications. (Email, too.) The idea that we don't have federated systems and that those that do exist are unused isn't particularly based in reality.
It's true that the Free Software federated systems that are based more strongly in ideology than in bringing use-value to users have generally failed to see uptake, but that should probably not come as a surprise.
i'm not a big android user, but having used cyanogenmod with fdroid and free apps exclusively for a bit, i'm quite impressed with the free software userland, especially offline maps and the puredata-patch runner, pddroidparty.
pity about the drivers though. at a speech in copenhagen stallman was encouraging people to start reverse engineering these.
it's nice that at least some of the phones are somehow open.
Isn't the argument above that he was wrong? He may have been morally right that free software is an inherent good over proprietary software, but that's sort of a given for this discussion.
He was tactically wrong in taking an expansive view of both what free software licenses should enforce and what fell within their realm of enforcement. He was hoping that GCC would be so much better than the competition that people would use it despite it being GPL, and for a long while (e.g., NeXT's Objective-C patches) it was. But the GPL, especially the GPLv3, was perceived as way more onerous than Stallman expected it would be, and eventually it became worth people investing heavily in a competitor just to avoid the GPL. If he had aimed a little less high with the GPL and allowed a plugin architecture / documented IR for GCC earlier, then LLVM may never have taken off.
Alternatively, he was tactically wrong in thinking that strong copyleft was required to achieve the goal of software freedom. Note that the clang in the NDK "is now a nearly pure upstream clang," not because they're required to, but because the pace of modern software development makes it genuinely easier to voluntarily submit changes upstream.
If Apple takes a stance against GPL because they don't perceive it as in their commercial best interest, that doesn't mean Stallman was wrong even in a 'tactical' sense.
Of course Apple does not perceive the GPL as in their commercial best interest. Back when NeXT was writing what would eventually become Apple's toolchain, Steve Jobs tried to convince Stallman that he could come up with a plugin architecture and keep the Objective-C bit private. Stallman went to his lawyer and came back and said, no, you can't do that.
Steve Jobs' perception, and NeXT/Apple's perception, never changed. What changed is that in the NeXT days, GCC was so much better that Apple said, we'll make the tradeoff of using GCC even though we don't want the GPL. A few years ago Apple said, we'll make the tradeoff of avoiding the GPL even though we want GCC. And they went and invested significant resources into LLVM.
The GPL was meant to be a tool to get people who did not perceive free software as in their best interest to contribute to free software anyway. It might not have changed whether they were "haters" as a matter of opinion, but it was definitely intended to change their concrete actions. And it did -- for a very long time.
The dream of a GPL world only ever worked if the entire ecosystem was GPL and it was possible to derive a living from the ecosystem. The revolution never happened. One could even argue that RMS's more civilized age never really existed outside of the MIT AI Lab, so it was doomed from the beginning.
Due to his stubbornness in not accepting realpolitik, the savannah has now become a reservation that is shrinking daily as people abandon it for more liberal climes.
In the GPL world, every user is a developer too. This might have been true at the beginning, but it is very far from true now.
That's what RMS failed and still fails to grasp.
I think you've got it backwards; in the GPL world, every developer is a user. This becomes apparent when you see why the GPL came into existence and what it sets out to accomplish (safeguarding users' freedoms at the expense of the developers').
And GP's point is that a) statistically speaking, pretty much all of the users of software today are non-developers, and b) these users don't care about the freedoms that the GPL seeks to protect. Even in the small subset of users that are developers, the popularity of permissive licenses indicates that most would rather just use and open up source code without worrying about restrictions imposed by others or imposing restrictions on others out of some sense of morality.
The upshot is that the GPL just creates additional friction for developers in the pursuit of ideals that, statistically speaking, nobody really cares about. This is why it is misguided.
The primary purpose of the GPL is to ensure that users have the freedom to modify and redistribute the software they use, should they choose to do so. Whether the user does so or not is irrelevant. That's why I don't think it's a matter of whether end users care, or how large a percentage they form of the overall user base.
Developers who decide to license their work under the GPL do so to safeguard their own freedom and the freedom of their users. To that end, I think the GPL hits its mark.
What GCC and RMS did (and I suspect this was their goal all along) was to change the Overton Window [1] of acceptability for open source.
Remember, this was still the time when Solaris was not open source and Microsoft was borging competitors. It was definitely not the Visual-Studio-is-open-source era.
By creating a hard line, and building a compelling alternative to proprietary compilers, he planted a seed of change in the minds of the young programmers of that time... all of whom are senior management now and share a belief that things like Protocol Buffers should be open source.
In fact, it's interesting that Android is inducing this change all over again (not Linux). It's a viable answer to "it's not as good as an iPhone, why should I buy it?"
The true genius is Bill Gates, who invented closed-source software in his 1976 letter to hobbyists.
Now nobody can think of software as anything other than naturally proprietary.
Hence the opportunity for a natural monopoly. Bill Gates used security as an argument a lot, yet only became a little more secure by using permissive open source or by buying software from poorer competitors that had no good backing from their banks.
Bill Gates is the Baron de Beaumarchais of computers: whining about being ripped off for software he did not create, because people would give away the interpreter so that their code could be used.
A right on the second creation.
Just for the record: Beaumarchais, while whining for special treatment as an artist, was a smuggler, a weapon dealer, a baron, an aristocrat, and a revolutionary, AND he created the fundamental rights of authors.
The true foundation of liberalism: as long as it works for me, let's do revolution.
The FSF ideology was born at a time when the level of bidirectional mistrust between closed source software vendors and open source advocates was a lot higher than it is currently. The world has evolved since then, and the business world now embraces open source, possibly for reasons that have nothing to do with ideology (shared R&D costs, commoditizing lower layers of the stack, and the realization that you can just as easily get lock-in with open source software as with closed source). In a world where even Microsoft is releasing more and more of its crown jewels as open source in any given week, the FSF's position can appear out of place, but this, I think, is a testament to how far we have come on the original goals of the project.
The dangers that befell things like the Gosling Emacs code, which informed the GPL and the FSF, haven't gone away. Companies like Apple are actively working to erode any progress that has been made on this front because of their intense levels of investment in proprietary lock-in.
Yes they were so actively working to erode progress that they offered to license LLVM under the GPL. But because RMS uses an e-mail client written in the last century he missed the offer.
The Rust team knew from the outset that repurposing a C++ backend was the only pragmatic way to compete with C++ on performance given their manpower constraints and projected timeframe. Even if all available backends had required the use of GPL, Rust would have still chosen one and moved forward with it.
Fortunately, given that Graydon did prefer a permissive license for the compiler, LLVM provided exactly what he was looking for.
I'm not sure if this will stoke a flamewar, because I've never seen a discussion here on this (might just be hanging around different articles). Is there any technical reason to prefer clang over gcc, other than ideological reasons? I get that the modular nature of LLVM is useful, but is the Android NDK using this feature?
A quick Google search shows clang lagging gcc in all but one test [0], at least on Intel Broadwell. Again, what is the benefit, other than the permissiveness of the license?
I don't know much about Android NDK, but in general, people use clang because:
- Clear and readable diagnostics -- especially useful when debugging nasty C++ template stuff where the type signature is a page long. (GCC is catching up in this area)
- Modular design -- if an IDE wants to have syntax highlighting or an Xcode/Eclipse/IntelliJ-like "fix it" feature, it can use the clang frontend to parse the code. Also, GCC's "big ball of mud" design (IIRC they do some optimizations like constant folding in the parser) makes it harder to hack on unless you're already pretty familiar with the codebase.
- Less hostile community -- people have tried to clean up GCC but their patches were rejected (see the RMS email link below)
- Helps you with standards compliance -- clang will still compile code with compiler-specific extensions but it has an option to output a warning. There are some codebases that have come to rely on these behaviors so clang's warnings can help make sure you don't end up tied to one specific compiler.
- Personally, I like AddressSanitizer (ASan). It does similar checks as Valgrind, but using compile-time instrumentation -- just pass in an additional flag. Gets rid of a dependency.
ASan and UBSan are the same in both; the GCC and clang instrumentation shims were both contributed by Google, and the resulting instrumentation is the same.
in contrast, i recall pre-egcs gcc being very easy to hack and port.
egcs, on the other hand, was a much needed kick in performance and features of gcc, but had the negative of making it harder to maintain and understand.
lcc was a good example of moving back into the simple compiler design, but it seemed to have stayed more in academia.
Here's one incredibly awesome feature of Clang: Windows support (thanks to Google, actually).
No Cygwin/msys required -- it implements the Windows calling conventions, exception handling, etc. You can link modules compiled with Clang using the Microsoft linker.
mingw (now mingw64) has been doing that for ages as well -- I'm too busy to check right now, but I suspect since before clang supported windows.
gcc has had support for every operating system under the sun at one point or another -- I fondly remember using DJ Delorie's DJGPP version for DOS + extender, and if I'm not mistaken, there was also a release from delorie that could target Win32 on Windows 95.
Clang had a number of benefits over GCC such as better error messages and integration points for IDEs (syntax checking/etc) which caused Apple to switch to it (on top of licensing). It may have been faster, not sure, but the internals were supposed to be MUCH cleaner and easier to work with.
I know a number of these issues have been improved/fixed in GCC over the last few years. Outside of friendliness to internal tinkering and to non-GPL software, I'm not sure if one has serious technical superiority. I believe clang is written in C++ instead of C, for whatever that's worth.
GCC has always had a much bigger set of supported platforms and processors.
For development, having full access to the AST at will is a definite good reason. Having such allows you to analyze your code as you're writing it in much more intimate ways, allowing for better refactoring tools, better error analysis, better variable completion, etc. Additionally, because of the modular format of clang, pretty much every build is a cross compiler. You don't need to rebuild clang to get support for arm and x86, for example. There's even targets for GPUs, so you can use the same toolset top to bottom much more effectively.
Better security features for one. Clang supports SafeStack and Control Flow Integrity. SafeStack is much more performant than gcc's stack protectors, and gcc offers no equivalent of CFI.
So, I'm not familiar with all of the CFI protections available in clang, but is full CFI actually implemented in Clang?
Especially with C/C++, there are a lot of potential attack vectors that would need to be closed (stack smashing, ROP, vtable corruption). From a layman's perspective, some of these need full memory-corruption protection.
Any references would be appreciated. The last time I had to address CFI in C, static analysis (with greater precision than Clang had) was involved.
It's an entirely subjective argument - but I've felt the warning and error messages from clang were so much better than those from GCC that developing under clang became a no-brainer.
In addition to what other people have said, Clang and LLVM are improving at a much faster rate than GCC.
So while they may still be lagging in some areas, I'm confident they'll get ahead of GCC in those areas long before GCC catches up in the areas where it's behind.
The permissive license of LLVM means that you can build it into tools. So in that one case, the political difference of the license does lead to a technical difference.
How diverse are the CPU manufacturers for Android phones? The major concern when it comes to Clang is that developers will need to buy proprietary plugins to compile open source code that depends on proprietary features, or to have applications run at native speed.
The majority of Android phones out there are Qualcomm Snapdragon these days, and will be for the foreseeable future, as Qualcomm just inked an exclusive deal with Samsung. MediaTek SoCs were already a proprietary nightmare of unreleased kernels and toolchains, except for the leaked MT6589.
In "The Right to Read", "license" refers to a license from the state, like a driver's license or gun license; not a license from the vendor (a copyright license). Vendors licensing proprietary compilers isn't something that TRR predicted--it was a common practice before GCC became dominant.
As someone messing with LLVM a lot.. basically LLVM is where a ton of development effort is going. GCC is a dead end it seems like. LLVM is designed to be extended where as GCC.. well, usually doesn't really seem to have been designed.
It, sadly, goes deeper than that, past architecture to philosophy.
LLVM is designed to be modularized and embedded in other binaries and toolchains, which makes it easier to build extensions and tooling around it.
In addition to having a mountain of legacy code, gcc's project steering is philosophically opposed to allowing non-free (as in GPL) plugins [https://lwn.net/Articles/582242/]. Because of this, developers looking for the most flexible toolchain seem to be backing slowly toward the exit as far as gcc is concerned, as the Clang / LLVM steering team has made clear that they don't see similar incompatibilities between their tool and the way developers choose to use it.
There is a project underway to break gcc down into components [https://gcc.gnu.org/wiki/rearch], so time will tell what comes of these two toolchains.
> gcc's project steering is philosophically opposed to allowing non-free (as in GPL) plugins
I think that's the right philosophy. Why shouldn't you contribute your plugin if you derive value from GCC, so that other people can derive value from your work just as you did from theirs? (If you distribute it, that is; if you wrote it in-house for internal use, then it doesn't matter at all!)
It's not like the code generated by GCC is licensed virally.
It's a shame that GCC's code is hard to modify and difficult to extend. LLVM's permissive license means that contributions don't flow back unless it's favourable to the entity contributing them (such as good PR). I imagine that if somebody or some company wrote an excellent, non-obvious, non-trivial toolchain using LLVM and derived profit from it, they would not _want_ to contribute it back.
The philosophy went farther than, "We want GCC plugins to be free." It went to, "We want GCC plugins to be basically impossible, because otherwise the risk of non-free plugins is too high."
So, the FSF sacrificed technical needs in order to serve philosophical goals.
GCC has been designed specifically NOT to be extended.
Richard Stallman reasoned that by exposing an interface to other applications, or allowing other applications to access or modify intermediate code/trees, closed source proprietary applications could be built around GCC.
He was probably right ... but we all ended up with a worse compiler.
More than that; we ended up with a crippled Emacs. What could and should be a full-featured IDE built into a self-documenting programmable Lisp-derivative virtual machine is instead a continuation of the "good-enough" conglomeration of regexp-driven hacks that continues to drive new engineers to real and productive solutions produced by the likes of Microsoft.
"GCC has been designed specifically NOT to be extended."
Several extension mechanisms have been built out in the last few years, including plugins for intermediate passes of compilation, and a library to operate as a JIT.
Those are all general issues that would apply to developing GCC; aside from compile times and tooling integration, they don't affect NDK users. So the question goes back to being specific: why deprecate GCC while it's still popular elsewhere and well supported on this hardware?
A) I think you mean "deprecate"; it's been depreciating just fine by itself.
B) If you ever want to make a change to the toolchain, HARD SWERVE on supporting GCC. It's a liability if you need to modify it at all. This may have more to do with what Google wants to support rather than any effect on the end user.
I am pleasantly surprised that so many people have taken the time to read the source code of multiple compilers and have an opinion on why one is better than the other.
Speaking of LLVM's "modular" architecture: LLVM's architecture is so modular that projects like the Rust compiler and Emscripten use their own modified forks of the LLVM tree...
What's the point in being "modular", if you can't add a backend nor a frontend without having to clone-and-modify the whole LLVM tree?
(And what if I want to compile Rust to Javascript? Which of these incompatible LLVM branches should I use?)
Rust does work with LLVM releases. In the past it didn't because of various bugs in LLVM, but all blocking bug fixes landed, where "blocking" is defined as those preventing compiler bootstrap. Currently, Rust CI checks for every commit that it bootstraps with LLVM 3.7 branch.
Rust maintains the fork because in the foreseeable future, there will be non-blocking bug fixes and Rust will not follow LLVM trunk to get them.
The title is a bit misleading. It is deprecated in the context of that project. Meaning they personally won't support it anymore. It's not deprecated in general as a compiler.
It's a pain to format text. Things like: printing an int in hex, printing a zero-padded int of a specific length, printing a float to two decimal places. And having those formatting changes be stored silently as global state leads to unexpected behavior.
FWIW, most of these options revert back to default as soon as you format something; but I agree that I would have preferred iomanip to generally be a wrapper around values rather than a silent modifier of the stream.
(...and if it makes you feel any worse, I got curious when I saw your response which ones didn't reset, and it turns out I had it backwards: most of them don't reset, except the one that does :/.)
The same way you'd do it in printf? Stringize data and insert it in the middle of static text? (Pardon my inelegant code, I've been away from C++ and doing Erlang for quite a while now.)
void printStuff(ostream& Os,  // must be a reference; ostream isn't copyable
    string WarningPart, string SizePart, string EndPart, int Size) {
  // The English version might look like:
  // "Warning: Size too big (200). Get smaller."
  Os << WarningPart << SizePart << Size << EndPart << endl;
}
I expect that if I thought about the problem for a few minutes (and was unable to find any C++ i18n libraries) I would come up with something much nicer and easier to use than this thing I spent ten seconds working on. :)
POSIX printf allows a translator to write a conversion spec like "%1$d" that says which arg to print. This lets you do stuff like put currency before or after the amount depending on what's idiomatic for the locale, without recompiling (if you lookup your format strings at runtime).
That solution is good, but it assumes you can break the message up into those parts for every language and use the same iostream sequence for every language.
With printf, we have gettext. I don't really see any analogue to gettext in iostream-world.
I dunno, I think bit-shifting a string to write to a file is pretty unreadable, not to mention C already has rock-solid string formatting options. It's a miasma of operator abuse built into "hello world".
Do you have an actual response? How is iostream any better than *printf? At least you can read the format of the latter without needing to google un-googleable operators.
In addition to the already-mentioned type safety and user-defined operator<< /etc overloading, it's very liberating to read into a std::string and not have to do a bunch of memory management and worry about null-termination or buffer overflows.
What consistent mechanism do you use to printf an instance of an arbitrary user-defined type?
With ostream, you implement
ostream& operator<<(ostream&, const Class&)
and you're done.
With printf, you have to have a convention and remember to stick to it. (Do I call .toString(), .stringize(), .getString()... ??)
For one-off things, printf is great. You take it from me when you pry it from my goddamn cold, dead, hands. However, for real work io*stream are rather nice.
With printf, you do have to look at the docs to see what the string conversion function is. With iostream, you still have to look at the docs to see if the ostream operator is implemented for the class, it's certainly not universal.
It has been so long that that bit of information had dropped out of my brain. I went to verify, and -obviously- there is no auto-conversion for instances of a non-trivial class.
I never said io*stream support was universal, [0] I said that it provided a _consistent_ mechanism. :)
[0] I mean, it's obvious that anything that you don't get for free out of the box is bound to _not_ be universal. Edit: Actually, the thing you get out of the box when you present an instance of a non-trivial user-defined class to both printf and ostream is equally useless... unless you're debugging and are interested in the address of the instance of an object.
> ...I'm just questioning whether this consistency buys you anything.
It buys you the same thing that API consistency always buys you: reduced cognitive overhead when dealing with a given API. :)
Edit: As GFK_of_xmaspast mentions [0] instances of non-trivial classes are not compatible with the default implementations of "operator <<" in ostream. So, you can try "ostream << instanceOfNonTrivial". If it passes the compiler, then you know that operator << is defined for that class.
So, no need to check the docs... just ask the compiler. :)
I find that how I print my data changes depending on why I'm printing, making a convention necessary anyways. Is the ostream overload the user-visible format, the test-failure format, or the serialization format?
noob question - what is the use case for a program that operates on arbitrary user defined types?
Dynamic typing sounds lazy and slow in my head, might as well use python at this point
> ...what is the use case for a program that operates on arbitrary user defined types?
That's not what's going on. Neither C nor C++ really permits that.
We're just talking about the relative merits of using printf and friends vs. using iostream to format data for the console or disk.
In either case, you have to write the code for your class that massages the class's internals correctly, but the question is: do you write to the ostream insertion and istream extraction API, or do you write to the substantially less well-defined printf-and-friends API?
printf has type checking in every major compiler. You just need to annotate all of your printf-y functions with attributes to tell the compiler "this function does printf, the format string is the nth argument, and the format arguments start at the mth argument".
It doesn't work with non-constexpr format strings, but there aren't many justifications for those.
NDK is only for Android (as in, a C++ toolchain for Android; the toolchain runs on Windows, Linux, and Mac, but the resulting executables are meant for Android)... what do you mean by Win64 in this context?
Some comments about how GCC could have avoided this made me think that this is a setback for free software, but why isn't this just a rising tide benefiting both free and proprietary software?
The NCSA license explicitly allows sublicensing, so both the GNU project and Microsoft can create derivatives under free and proprietary licenses respectively, for example.
This comment breaks the HN guideline that says Please avoid introducing classic flamewar topics unless you have something genuinely new to say about them. It has triggered a devolutionary cascade that so far has made it as low as "Wow, what an asshole" and "man who thinks sex with children should be legalized". Blasting craters in the threads like that damages this site. I'm sure you didn't mean to, but please don't do it again.
I used to think the same as you, but I came around.
I am annoyed by the FSF dogma because they lay an exclusive claim on the word ethics when applied to software. I consider myself an ethical person even in software. But my definition of ethical is less strict than the RMS definition.
>they lay an exclusive claim on the word ethics when applied to software.
I'm not sure that's a fair characterization. They've defined an ethical framework for software and then they discuss things being ethical or not within that framework.
Anyone is free to use a different ethical framework and decide if an action is ethical or not within that framework instead.
Personally, I'm annoyed by RMS because it's annoying when someone I disagree with on so many things turns out to be right so often. I think they call that "cognitive dissonance."
He's consistent to a point. He's quite silent about the hardware aspect, even though free software licensed CPU designs have been available for a long while.
If you have the design, and you have access to FPGAs, I'd say that's got you pretty close already.
Edit: We were talking about CPUs, not complete systems. And yes, the performance would be poor, but RMS has already strongly established that in his book performance is a distant runner-up to free.
Assuming you have an open FPGA toolchain (which AFAIK has existed for only a year or so) and open FPGA designs supported by that toolchain (which AFAIK don't exist yet). Even if you don't insist on the FPGA's own design being open (which makes for a rather arbitrary border), you'll have a hard time getting performance people actually want out of the supported chips. It's a nice dream, but not practical right now.
If by "pretty close" you mean nowhere near a usable system, then you'd be correct. If you're not a hardware person, as I'm not, I don't think I could go from a blank FPGA and parts to a working laptop with a keyboard, trackpoint, and LCD screen in any reasonable amount of time, nor with any certainty that it would work and not be error-prone.
On the other hand, it's very difficult to build a design for an FPGA using only free software. (Until very recently, it was outright impossible; now, it's merely very difficult, and requires that you use certain very specific FPGAs.)
>If by "pretty close" you mean nowhere near a usable system, then you'd be correct. If you're not a hardware person, as I'm not, I don't think I could go from a blank FPGA and parts to a working laptop with a keyboard, trackpoint, and LCD screen in any reasonable amount of time, nor with any certainty that it would work and not be error-prone.
The exact same argument can be made about the operating system.
If you're lucky it's off-chip and your soldering iron and a very sharp knife will come in handy.
The problem with 'patching' hardware these days (oh, I'm that old) is that most of the time you'll find your problem is located inside a chip, and when it isn't, the traces run in planes that you can't access (if you're lucky they might run in a spot where you can dremel through and then cut and solder two small wires to the buried trace). Vias don't help either (especially not ones that start and end under BGAs).
Patching hardware was never easy, but with today's degree of integration of components and SoCs it is harder than ever and frequently downright impossible.
Lots of things have gotten easier since the through-hole era, but hardware fixes aren't one of them.
That depends. Microcode tends to be used in CPUs rather than regular ASICs and not all CPUs have re-writable microcode (though quite a few of them do). So unless you're talking about a CPU the answer is probably 'no'.
You probably have a bigger chance that there is an FPGA on board that you can re-load.
I've "patched" my CPU (using a pencil to close some leads on an old AMD cpu to enable overclocking) and I've "patched" a motherboard (replacing a blown capacitor). But that was analogous to patching bugs in your closed source software with a hex editor.
Eh, IDK about that. It's pretty well-understood how to make a simple but usable OS kernel. I think most competent programmers would be able to do it (or at least know where to start) given enough free time.
>RMS has already strongly established that in his book performance is a distant runner-up to free.
To him, maybe. Not all of us have the luxury of getting paid to be a techno-luddite, using antiquated workflows and having the spare time available to re/write drivers when necessary.
If I made the same performance/freedom trade-offs I'd be completely unable to do my job.
The fact that you consider performance and freedom to be a trade off means that you've already given up even the slightest hope of freedom. You as a developer and a user should want the freedoms afforded to you by the GPL, or any copyleft license for that matter. In an ideal world your high-performance software would be released under the GPL and you would have the best of both worlds. The war won't be won by rewriting every single tool and releasing it under the GPL, it will be won by pressuring existing companies to license their software under the GPL.
The people who want non-copyleft licenses to succeed are those who wish to make a profit from the control and ignorance they can impose on their users by closing their source and platform, or those who are willing to trade their users' freedoms to appease them.
Businesses have a huge amount of leverage on the OSS software ecosystem and so it's really no surprise that the licensing choices are to benefit the businesses funding the development rather than the users. It's to be expected but no less sad. There's a small group of passionate people who have basically dedicated their lives to making the world a better place by writing software that's truly free and respects its users, they built an entire community and movement around the idea, and actually stand by their beliefs -- yet people dismiss them because the company worth hundreds of billions of dollars has a better UX.
>The fact that you consider performance and freedom to be a trade off means that you've already given up even the slightest hope of freedom.
True, mostly thanks to shitty companies like Apple, Microsoft, Autodesk, etc.
>You as a developer and a user should want the freedoms afforded to you by the GPL or any copyleft license for that matter.
I do.
>In an ideal world your high-performance software would be released under the GPL and you would have the best of both worlds.
We don't live in an ideal world and never will.
>The war won't be won by rewriting every single tool and releasing it under the GPL, it will be won by pressuring existing companies to license their software under the GPL.
This doesn't work, especially in software for engineering where there are often sole, hegemonic powers and established monopolies and where the cost to enter the market as a new competitor is non-trivial. We're not talking a desktop manager or a text editor here, we're talking millions of dollars of R&D for things like CFD suites. Open alternatives exist (like, say, OpenFOAM) but they often lack accreditation/certification/rigorous testing and when you're dealing with people's lives the choice is often proscribed entirely.
>The people who want non-copyleft licenses to succeed are those who wish to make a profit from the control and ignorance they can impose on their users by closing their source and platform, or those who are willing to trade their users' freedoms to appease them.
There's a financial incentive to creating walled gardens and proprietary software that you can charge for.
>yet people dismiss them because the company worth hundreds of billions of dollars has a better UX.
When the choice is between "being able to function at my job, at all" and "use free software", well, the choice is clear.
I wonder what kind of world we would have if GNU were completely defeated, if no one released anything under the GPL anymore. Sure, free software would still exist, but nobody would call it "free software" and no one would prevent anyone else from creating non-free derivatives. Nearly all software would be "open core", with some free software here and there but with many or most of the interesting components non-free.
We're already very close to that world. Many people seem happy to see GNU losing ground every day. For myself, I am not sure that we are working towards the best possible future, but I'm not even sure what that future should be.
We used to call that "public domain software" and it was big in the 80s from what I recall (I was a kid then). You'd release something to the public domain and people can do with it whatever they wished. Then GNU came around and ate everyone's lunch, for better or worse. I suspect for better.
Maybe GNU is just a stop-gap movement to get everyone on board the FOSS train, who knows. BSD licensing hasn't caused any apocalypse yet and arguably it provided a networking stack for Windows that was superior to anything MS could rush out the door back in the NT 3.5 days. There's something nice about being able to put the code into commercial products without worrying about strict GNU/GPL-like conditions.
It was Microsoft that came and ate public domain software's lunch, not GNU. GNU and the GPL were not needed until aggressive copyright enforcers such as Microsoft came on the scene. Bill Gates wrote a famous rant calling anyone that shares code a thief, essentially. The GPL was a reaction to that attitude and has saved the culture of sharing code.
If most people defaulted to sharing code like was done pre-Microsoft, then BSD would be fine. Read RMS's rant that is linked above. He's still defending against "adversaries" of Freedom. This is war!
OK, so even MS is now open sourcing (some of) their code. Maybe it's not "war" anymore, but you need to understand the history a little.
> Bill Gates wrote a famous rant calling anyone that shares code a thief, essentially. The GPL was a reaction to that attitude and has saved the culture of sharing code.
Didn't Gates' letter come out ten or more years before the first version of the GPL?
Going back even further, we used to just call it "software". The idea that software could be copyrighted didn't even happen until the 1970s, and it took a while for the idea to really take hold.
This is why rms is the way he is. He's old. He remembers the days when all software was free. He's been trying to get that back ever since.
RMS is an idealist. If people actually did what he proposed (envisioned), the world would be a much better place.
It is hard to see the entire impact of Free software, and even harder to predict how the world would look if people actually stood up for it, but I am sure we would have:
- more secure Internet
- more decentralized/distributed Internet
- no ISP blocks
- much harder if not even impossible NSA spying
- no disgusting DRM
- no patents and patent trolls -> more innovation
- no forced updates, no abandoned users
- more competition in hardware and software field
- things being developed for people and not sheer profit
- ...
- add your points
You listed a bunch of things without substantiating. For example, how would free software prevent the NSA from spying? Internet backbone taps don't care if your HTTP implementation is running under Windows or Linux...
> how would free software prevent the NSA from spying?
A lot of hard problems come from the fact that you need lots of developer attention to build and maintain solutions that just work. The domain is tricky and complex enough that you can make it zero-effort for one configuration, but making it zero-effort for all possible, or even all likely, configurations takes more resources than are available.
The example relevant here is cryptography software. It's not hard to encrypt stuff. It's hard to make end-to-end solutions that just work, for all possible uses and across all possible platforms.
Free software restricts the ecosystem to software platforms that are open and extensible, which vastly reduces the effort needed to maintain end-to-end, user-friendly encryption.
It would make it pretty much impossible for the NSA to spy on citizens if we only had free software platforms to support.
Not all answers need to be purely technical ones. They don't go and search person by person via manpower; they use metadata and algorithms. Simply overflowing their metadata would force them to use other means. You also forget the social impact: if the majority of people actually listened to RMS, the majority would be very privacy- and security-minded and would press the government to change the laws. (NSA spying is illegal anyway, but with the majority of folks actively pursuing this, it would be hard or impossible for them to just go and spy on and ruin the privacy of the entire planet.)
RMS believes that end user of software / hardware should be in complete control, it was/is never about developers.
BSD/MIT/etc. licenses may provide more developer freedom, but often they are used to restrict user freedom as the code is folded/packaged into commercial projects with non-free licenses.
It's a somewhat artificial distinction, since the user of a piece of software becomes a developer to modify it.
And to preclude modifications which prevent him from effectively monetizing his labour is to infringe on his freedom.
Perhaps it would not be if money (etc.) didn't exist. It's always the way with idealists, however, to prescribe policies that make people free in their utopia but cause chaos in our reality.
> And to preclude modifications which prevent him from effectively monetizing his labour is to infringe on his freedom.
This sounds a lot like the people saying that adblocker software is infringing the freedom of people trying to monetize their websites (or maybe even "stealing" from them). And I think the same response applies: the fact that your business model isn't compatible with adblocker software, or with my choice of software license, isn't my problem.
Plus it seems extra presumptuous to claim some kind of inherent right to profit off derivative works of my software. You can only distribute derivative works of it at all because I've granted you a license. A royalty-free one at that! With the only condition being the copyleft condition: that you keep any derived works as open as the original. If you don't like that condition, we can negotiate a commercial license on whatever terms you want, with appropriate payment. Or you can write your own software...
> Plus it seems extra presumptuous to claim some kind of inherent right to profit off derivative works of my software. You can only distribute derivative works of it at all because I've granted you a license.
Sure, but one's choice of the GPL has signalled that one wishes that others be permitted the freedom to build upon the work and do as they wish [0] with their additions -as well as the original work- just so long as they -upon request- distribute the original and their additions to folks who have received the binaries.
[0] This includes charging for access to compiled versions of the software, access to support and documentation, etc. etc. etc. [1]
It's just an observation about which licence maximises the freedom of the user.
Since we're in a capitalist system, monetization of one's labour is hugely important for real-world freedom. To deny people that is to make them less free.
So the most permissive licence is the one providing the most freedom.
> Plus it seems extra presumptuous to claim some kind of inherent right to profit off derivative works of my software.
Who's claiming that? On the other hand, I've met more than a few pro-FSF people who seem to think it's cromulent to claim that I don't have the right to profit off my work (which is insane; you can pick whatever license you'd like, copyleft or not, and so can I).
You aren't profiting off your work, you're piggybacking off their work with just enough tweaks to sell it. Go write your own software if you want to profit off your work. Then you won't have to deal with the copyleft licenses because you won't be using anyone else's work.
I think you should re-read my post. I don't have a problem with other people choosing to use the GPL, and dealing with its terms or choosing not to use it. I have a problem with the temerity of people like the FSF insisting I use copyleft or I'm somehow immoral.
I think you have likely confused what the pro-FSF people say...
Likely they said it is unethical for you to distribute software without access to the code, which you interpreted as being against profiting, since you likely believe that open source == no profit.
I'm well aware of how to make money with open-source software. I do it. I am working to no longer do it, because wage slavery (be it W-2 or under the guise of "consulting") sucks.
Turns out that you need capital to do that, and the FSF's insistence that treating digital bits as capital is immoral is silly.
That has nothing to do with what I've said, and I don't think you understand what capital is. The GPL is fundamentally incompatible with the notion of software as a capital asset; when its value as a salable object is effectively zero due to the ease of marginal duplication, one's business model becomes a labor-oriented one, i.e. consulting. Which means burning one's time, which is what one acquires capital assets to avoid doing.
But, clearly, I'm so misinformed. I obviously haven't thought this through. I didn't read everything Stallman wrote a decade ago. And I certainly haven't read the GPL (both v2 and v3) line-by-line to understand its ramifications.
It's a pity you felt like you needed to be an asshole about this.
> ...I don't think you understand what capital is.
I -uh- do.
> The GPL is fundamentally incompatible with the notion of software as a capital asset; when its value as a salable object is effectively zero due to the ease of marginal duplication...
Two things
1) That applies to all software. All software -regardless of license- has an effectively-zero duplication cost.
2) You can still sell software distributed under the terms of the GPLv2. You can charge any price you like.
Calling me names doesn't change the falsity of your statement, which was:
> [T]he FSF... [insists] that [treating] digital bits as capital is immoral[.]
This isn't their position. It never was.
You seem to think that the only way a thing can be considered capital is if one retains the exclusive rights to control the distribution of the thing. This is an over-narrow definition of capital. :)
> 1) That applies to all software. All software -regardless of license- has an effectively-zero duplication cost.
That's not true. Copyright allows the copyright holder to enforce a cost of copying in their license fee. You can illegally copy software at zero cost, but that's not the same thing. I can illegally ignore the GPL, too, but I won't.
> 2) You can still sell software distributed under the terms of the GPLv2. You can charge any price you like.
You're being disingenuous. The purchaser can and will inevitably redistribute the software free, plunging its effective value to zero and turning the only tenable long-term monetization strategy into a labor business. Even Red Hat is a labor business in all but name, they live off support contracts and custom development, and I would be incredibly surprised if you could find a business of nontrivial size--say, $2m/year in revenue--whose products are non-SaaS (i.e., not violating the spirit of the GPL while complying with the letter) and GPL-based for which this is not true.
I mean, this is not some shocking revelation I'm saying here--there's a reason the GPL and especially the AGPL are so popular as the crappy side of dual-license schemes, because it allows a company to hamstring those who would derive from their software while not crippling themselves.
The FSF at least supports the concept of government-granted monopolies over ideas; I do not. I would love to see the end of copyright and patents as concepts.
I firmly believe a world with less Windows and more Linux would be a wonderful place...
What I take from it is that he wants to provide freedom in general, not of any particular person.
If I have the freedom to take someone else's freedom away, then that is more freedom, until I actually take someone's freedom away.
To use an extreme example, if we had the freedom to own slaves, you could argue that there were more freedoms available to us. But if you actually want to protect freedom, you don't allow people to own slaves.
Nothing in the GPL prevents monetization; this is one of the major myths. I assume you're one of those people who believe it is impossible to make money off "free" code.
Further, it is not about the developer not making money off his/her code, but about not exploiting the work of others and denying follow-on developers access to that work.
Especially after clang showed the world what useful error messages look like.
And GCC has some serious limits when it comes to helping other tools, because RMS (probably rightfully) fears that if GCC exposed too-detailed backend information, people would start bolting non-GPL software onto it and building non-GPL compilers on top of GCC.
Which led to the funny situation of people asking to include LLVM-based tools in Emacs, because LLVM allows building code-analysis tools that GCC just doesn't provide the info for.
IMHO clang's success, and maybe even its existence, is a clear result of GCC's restrictions, but removing those restrictions would undermine central principles.
And both compilers are better now than they would be if the other didn't exist, so good for everyone I guess? It doesn't seem possible to reach all goals with just one of them.
That's not really true. Perhaps they can't make changes themselves, but they can ask others to do so, or hire someone.
Mozilla Science, for example, tries to pair up researchers who have ideas together with developers who can code to build open-source projects. Despite the fact that the researchers may not be able to code, the fact that they have the right to share and modify the code is important.
That is not true at all. I know many non-developers who will compile source code themselves. A common example is FFmpeg, where several compile flags make the software perform differently based on the options you select. This is an example of source code being useful to non-developers.
Compiling source code you don't comprehend and aren't capable of rewriting isn't really an exercise of freedom. You could just as well distribute a closed binary with runtime configuration options, for all it matters in that case.
If you're implying that everyone who usefully modifies source code or compiles with options is able to write the code from scratch, I would advise you to rethink your position.
To put a finer point on it, the concept of "layers of abstraction" is fundamental to computers and software. I can know quite a bit about adjusting the pieces of a program without having the ability to build it as effectively as I modify it.
Not from scratch, perhaps, but it still takes technical acumen to do anything useful with the source code at all, which implies that a non-technical user can't really exercise the freedoms of free software, beyond simply having access to the source code.
> RMS wants user freedom, not developer freedom
That is the point people always miss.
Or we reject his attempt to lay exclusive claim to a word as all-encompassing as "freedom" and to co-opt it to his personal pet ends. You know, either/or.
Yeah, whether I agree with it or not, I'm not suggesting he doesn't have a reasonable point to make. I'm just saying he's an asshole, and to some extent, that is working against his own goals.
RMS is a politician, dude; calm down. Let's look at the preceding sentences:
> The Clang and LLVM developers reach different conclusions from ours because they do not share our values and goals. They object to the measures we have taken to defend freedom because they see the inconvenience of them and do not recognize (or don't care about) the need for them. I would guess they describe their work as "open source" and do not talk about freedom.
That's much more nuanced than "anything that isn't GNU isn't free". He thinks that their license is a lot different, because he has a particular definition of "free" that he's championed for decades. Putting "and this is what I mean by 'free'" into every statement would be more PR-friendly, but redundant for... anyone who's heard RMS speak or breathe for 30 years.
If anything, this is a reasonable statement from a politician: "The Clang and LLVM developers reach different conclusions from ours because they do not share our values and goals." Maybe save the "what an asshole" comment for if he ever says, "LLVM hates freedom, GNU rules, BSD drools."
While I see where you're coming from, I don't agree with the way you've framed this. That idiosyncratic and generally unshared definition of "freedom" does, to me, make him (and his devotees) at least a little shitty when they attempt to claim moral superiority based on it.
I feel the same way about other religionists who assert judgment based on their own pet beliefs, it's not an FSF-specific thing.
It can be effective, but you need a much, much stronger kind of charisma and a better grasp of rhetoric than Stallman or your typical FSF zealot. When delivered by somebody who gets spittle-flecked about this stuff, it makes you crazy and a jerk, instead of just a jerk.
No words like "free" or "freedom" appear anywhere on the http://llvm.org/ homepage. The license merely says "free of charge" without any kind of rationale. Certainly there's no GNU-style manifesto explaining their carefully-considered views on the rights a software user ought to have.
Think what you like about whether they should talk about freedom, but we can easily confirm that they don't.
Similarly, no words like "freedom" appear on the home pages of many popular GPL-licensed projects. That doesn't mean that, "Mono doesn't talk about freedom". It doesn't mean that, "Python doesn't talk about freedom".
It means they didn't use the word "freedom" on their home page. OpenBSD people care about and talk about freedom, I assure you. They have a track record of ejecting software with insufficiently free licenses from inclusion in their project.
FreeBSD has switched to Clang because, among other reasons, they consider it freer. You and RMS may not agree that it is indeed freer, but it is obnoxious and demonstrably untrue to say they "don't talk about freedom".
Python is not licensed under the GPL, and it is not clear to me that they care about this sort of thing.
As for Mono, I was there at the conception of the project, and can thereby tell you off the top of my head that if you go to the History section of the documentation they have an archived copy of the premise, and it is dripping with "free as in freedom" language.
> The Clang and LLVM developers reach different conclusions from ours because they do not share our values and goals. They object to the measures we have taken to defend freedom because they see the inconvenience of them and do not recognize (or don't care about) the need for them. I would guess they describe their work as "open source" and do not talk about freedom.
If you're objecting to that, well, where do they talk about freedom? I'm saying rms appears to be correct because I can't find anything they have to say about the issue. I searched more than the homepage.
I get that you don't like what rms observed about the Clang and LLVM developers, but you haven't cited anything from them that would show him to be mistaken.
RMS uses "freedom" to mean "the best thing you can have", kind of like, but also kind of opposite to, the way people use "freedom" to describe America. It's just propaganda, who cares. Many people view permissive licenses as more free, whatever, it's a buzzword.
RMS' use of "free" is very well defined in terms of software (see: https://www.gnu.org/philosophy/free-sw.html). You are arguing his definition of free, but that isn't up for debate here. RMS may be pedantic, but I'd disagree that makes him an asshole.
IMHO claiming that a multi-faceted word like "freedom" is "trade-marked" to only mean one specific kind of freedom is a sure way to provoke misunderstandings and conflicts. Especially since it is generally seen as a good thing, so claiming other people don't respect it (because they mean something different when they use it) is going to piss them off.
See: every discussion of GPL and "freedom" in a public forum ever. One might think everybody is sick of these arguments already, especially since they are so predictable, but apparently it always stings some people enough.
RMS saying that other developers have a different concept of software freedom is reasonable. RMS saying that the difference is important, and that his idea of software freedom is the right one, is reasonable and worth debating. Saying that other developers "don't talk about freedom" because they use a different license is first-class assholery.
It's more reasonable than your stance here. You can quibble about how to define freedom, but given that RMS includes freedom from future vendor lock-in, it's entirely reasonable for him to say that other developers don't talk about freedom (as he means it) - because they don't. It's not hypothetical - BSD-derived code has been used in closed-source operating systems (both Windows and notably macOS) and lots of other software.
By using BSD over GPL you're choosing to make it easy for many people to use your code, even if that use eventually may support closed-source lock-in. Since most developers dabble in open source mostly as a prestige project, that's a reasonable choice - you're not doing it to prevent software monopolies or lock in, are you? So no, you don't care about freedom, exactly as RMS says. Most people are probably doing it for recognition by their peers and because they're interested in solving the problem at hand, and compromising on those two points for a slim chance that you might contribute to a more pervasively free software future sounds like a risky bet.
Neither of the two minor projects I wrote and support uses the (L)GPL.
> it's entirely reasonable for him to say that other developers don't talk about freedom (as he means it) - because they don't.
This is complete nonsense. Clang is replacing GCC in many BSD-licensed projects exactly because they feel it is more free than GCC. You don't have to agree that they are right, but saying they aren't talking about it is obnoxious and false.
No it's just his same argument as ever. The GPL specifically addresses RMS' issues of software freedom. Choosing a different license ignores the topic of software freedom as defined by the FSF. That's all he is saying.
Look the GPL is a hard choice. It is very unfriendly to a developer who is looking to make a living on their code, since it makes selling that code as an artifact. It's not impossible but much more complex than if you just have a EULA.
That said if you remove that difficulty you neuter the GPL and weaken the point of software freedom.
I get why RMS pisses people off. He doesn't keep arguing for using the GPL to be an asshole, but because he thinks software users are more important than software developers. Perhaps most developers feel differently?
> Look the GPL is a hard choice. It is very unfriendly to a developer who is looking to make a living on their code, since it makes selling that code as an artifact.
It seems like you accidentally a word somewhere in here, but know that you can charge ANY price you want for compiled software distributed under the terms of the GPL. Your only requirement is to provide the source code, upon request, to anyone to whom you have distributed your software.
Fulfilling the request with patches against other code is acceptable. Requiring a snail-mailed request is acceptable. Charging your actual direct costs for fulfilling the request is acceptable.
Selling GPL'd software isn't more complicated than selling non-GPL'd software.
Yeah. iPhone on a train is not an ideal way to have an Internet argument.
My point was that it complicates the business model of selling software artifacts, since the GPL does not prevent downstream dissemination of the source so long as the GPL is not violated. So the fairly common business of selling a binary blob of functionality that ceases to be legally useful after the license expires falls down with GPL-licensed software.
I suppose one might argue the GPL holds hostage a commercial software ecosystem that favors software sellers over software buyers. But well that's really the point.
And I think that's what angers people. It's certainly the impetus behind Open Source.
> And I think that [a commercial software ecosystem that favors software sellers over software buyers is] what angers people. It's certainly the impetus behind Open Source.
If I've correctly characterized your statement, I can't agree with it.
RMS was originally motivated by his sudden inability to get access to the code for the driver for their computer lab's fancy laser printer to fix an embarrassingly bad bug that was wasting a bunch of other people's time. Many people prefer open source software because the guaranteed availability of source code helps to keep software that BigCos would no longer maintain (for whatever reason) alive, or dramatically improve software that companies can't be arsed to get right. [0]
> So the fairly common business of selling a binary blob of functionality that ceases to legally be useful after the license expires falls down with GPL licensed software.
Eh? AFAICT, the model still stands.
If you've written everything in house, dual-license it. If you've not, and if you're writing software for BigCos, [1] put a clause in your support contract that stipulates that for the next X years, $CLIENT will upgrade to $LATEST_VERSION after $REASONABLE_TIME_PERIOD and will not use older versions, or else suffer termination of the support contract and monetary penalties.
I mean, lots of people pay for (and keep upgrading) RHEL rather than use CentOS or Scientific Linux.
[0] Ferinstance, DD-WRT and OpenWRT have been really, really good for the state of home WiFi router security.
[1] Which -frankly- is the only place I would expect to find such a license.
What does DRM support in a compiler tool chain even mean? The ability to develop DRM is available by the nature of the tool, but that is also true of GCC.
Maybe I'm misunderstanding, but isn't that what the Apple App Store has been doing since it was introduced? As I recall, Xcode used GCC back then, so how is Clang relevant?
Compiler bugs are usually limited to a single compilation unit, and reports are usually produced by doing a binary search on a preprocessed source file. This is certainly true for closed source as well - just redact the (few) symbol names and create something that will compile to an a.out that demonstrates the problem.
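The binary-search step can be sketched as a greedy chunk reduction over the preprocessed file: repeatedly try deleting halves (then smaller and smaller chunks) of the remaining lines, keeping a deletion only if the bug still reproduces. This is just an illustrative sketch, not how dedicated reducers like C-Reduce actually work; `triggers_bug` is a stand-in predicate for "run the compiler on this candidate and check for the failure".

```python
def reduce_testcase(lines, triggers_bug):
    """Greedily shrink `lines` while `triggers_bug` still holds."""
    chunk = max(len(lines) // 2, 1)
    while chunk >= 1:
        i = 0
        while i < len(lines):
            # Try the file with this chunk of lines removed.
            candidate = lines[:i] + lines[i + chunk:]
            if candidate and triggers_bug(candidate):
                lines = candidate   # chunk was irrelevant: drop it
            else:
                i += chunk          # chunk is needed to reproduce: keep it
        chunk //= 2                 # refine to smaller chunks
    return lines
```

In practice the predicate would preprocess with `-E`, invoke the compiler, and check the exit status or output; here it can be any boolean test on the line list.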
God saves a puppy anytime someone files a compiler bug with a trivial test case that demonstrates the problem.
> God saves a puppy anytime someone files a compiler bug with a trivial test case that demonstrates the problem.
The current (r10e) shipping version of clang cannot return 128-bit floats from a function.
#include <sstream>

int main(int c, char **v) {
    std::stringstream s("1.5");
    double x; s >> x;
    return x == 1.5 ? 0 : 1;
}
EDIT: You don't need "long double" in the above. double (or float) will show the problem. The standard library converts all floating point numbers to long double internally.
I'd be wary of concluding that "if it works on Apple Clang, it will work on all Clangs". I've seen this to be untrue in multiple cases, and the root cause can be found in a multitude of locations: LLVM, Clang, libc (glibc, musl, MSVC, Apple libc), the C++ standard library (libstdc++, libc++, MSVC), compiler-rt, the platform ABI (Apple ABIs are pretty much SysV, except when they aren't...), and of course the actual version/branch of all of these things. It is certainly a start but unfortunately an over-generalization. :)
Depends on the bug; it could be that it's only triggered as a result of the compiler getting into a certain state by processing a bunch of code leading up to the problem area.