Imagine being a Smalltalker and still missing the underlying context of Joe's article - specifically, around the actor model (as used extensively in Erlang) being substantially closer to OOP as intended by e.g. Smalltalk than to the popularized inheritance-heavy brand of "OOP".
You can see this in the objections:
> Objection 1 - Data structure and functions should not be bound together
...and yet the usual way to store data in Erlang is to make it an argument of a function that repeatedly and recursively calls itself.
> Objection 2 - Everything has to be an object.
...which is the case in Erlang/OTP, when you consider that everything is a process a.k.a. actor a.k.a. object. And just like with Smalltalk, these objects communicate via messages.
> Objection 3 - In an OOPL data type definitions are spread out all over the place.
...as they are in Erlang, because (barring some bolted-on approaches like records) data type definitions exist entirely as patterns to be matched.
> Objection 4 - Objects have private state.
...as do Erlang's processes.
That is, while maybe Joe didn't pick up on it at the time (per comments on that article: https://news.ycombinator.com/item?id=26586829), it's clear that said article was more a complaint against OOP as popularly envisioned, with all its classes and inheritance and such - not against the notion of object orientation entirely, which Erlang happens to implement, even if it might've been by accident.
Joe himself once said on stage (in a video with him and Kay; I'm not sure of its name) that Erlang is either the most OOP language or the least. I think he meant that it depends on which OOP you look at. If you look at Smalltalk and message passing, which Kay highlights as the prominent ideas, then it might be the most OOP (or close to it), while it is a 180 from mainstream OOP when we look at the actor system and how distributed systems work.
There is a function/type duality that goes very deep into the foundations of mathematics, but it's conceptually different from the one proposed in OOP. So I don't think interpreting all those similarities as OOP is quite correct.
A small nitpick: Erlang doesn't implement the Actor model. The similarity of Erlang's processes and the actors defined by Carl Hewitt is accidental, as stated by Joe Armstrong multiple times.
Why not? That's literally the whole point of the actor model; actors are definitionally units of concurrent computation. That's exactly why Hewitt based the actor model on how e.g. particles interact in physics - those interactions necessarily being concurrent.
That is:
> The similarity of Erlang's processes and the actors, defined by Carl Hewitt, is accidental as stated by Joe Armstrong multiple times.
Just because processes accidentally behave like actors doesn't mean they ain't actors.
Is that an actual law, or is it just that some implementations of the pattern are deficient? My impression is that the actor model was originally envisioned as being applied to a machine with hundreds or thousands of processors.
This feels a bit like a distinction without a difference. If 2 people come up with the same idea without knowledge of the other, that doesn't necessarily make them different ideas.
I think the conversation around OO suffers from the same problem that any major engineering trend suffers from: namely that eventually, the concept gets conflated with the way that enterprise software and overpaid consultants completely fudge the implementation of the concepts. Consultants get paid big bucks trying to convince your company that you need to be doing microservices or OO or nosql or whatever the latest fad is and that you need them to help you implement it. It’s not based on any real technical need. It’s just institutional FOMO.
OO has its place in the space of possible patterns to choose from, based on the kind of solution you need. I think these overarching claims of superiority are missing the point completely. I personally like a combination of functional and OO concepts that can work synergistically together, each playing the kind of role it excels at individually.
Off topic, I know, but it is not necessarily enterprise consultants who push for buzzword-compliant fads. Individual developers generally like to work on exciting new stuff and might also consider what looks good on the CV. Developer-driven startups certainly seem to be just as fad-driven as enterprises, which are generally more conservative.
That said, it is an important observation that widely used technologies are judged on the reality of their use, while less popular technologies are judged on their potential. For example, Java is judged on the quality of the code people see in the real world, developed and maintained over a long time by developers of varying skill, while Haskell is judged on examples written by experts to showcase the benefits of the language.
These discussions tend to boil down to OOP vs FP. That's a false dichotomy. The alternative to OOP is not FP, it's pre-OOP imperative programming with various techniques a la carte.
Should data and functions be together? That's a choice you can make on a case-by-case basis. It makes total sense that a HashMap has an insert() method. On the other hand, maybe your Player object is just some data which is interpreted by a PhysicsSystem and MovementSystem etc. Maybe it's clearer that all the physics calculations are in a common PhysicsSystem rather than being spread out into several implementing classes.
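Both choices can be sketched side by side (hedged Python sketch; `MultiMap`, `Player`, and `physics_step` are made-up names for illustration):

```python
# Operations bound to the data versus plain data interpreted by an
# external "system" function.
from dataclasses import dataclass

class MultiMap:
    """Data and behaviour together: insert() clearly belongs to the map."""
    def __init__(self) -> None:
        self._buckets: dict[str, list[int]] = {}

    def insert(self, key: str, value: int) -> None:
        self._buckets.setdefault(key, []).append(value)

@dataclass
class Player:
    """Plain data: just position and velocity, no behaviour."""
    x: float
    vx: float

def physics_step(players: list[Player], dt: float) -> None:
    """All movement logic in one place instead of spread across entities."""
    for p in players:
        p.x += p.vx * dt

m = MultiMap()
m.insert("scores", 10)
players = [Player(x=0.0, vx=2.0)]
physics_step(players, dt=0.5)
print(players[0].x)  # 1.0
```

Neither style is wrong; the first keeps the invariant logic next to the structure, the second keeps all physics in one readable place.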
Should you use a switch-statement or a polymorphic interface? If you're traversing a graph with a fixed set of nodes, or you need to change all systems when new nodes arrive, the (exhaustive) switch may be preferable. I'd certainly prefer it to the visitor pattern.
If you can extract a calculation into a pure function, then that's great. If you're doing something inherently stateful, FP techniques are probably not too great. Maybe you can afford a high-level functional interface, but keep the implementation stateful.
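That last compromise, a functional interface over a stateful implementation, is small enough to show directly (hedged sketch, made-up function):

```python
def running_totals(xs: list[int]) -> list[int]:
    # Internally imperative: a mutable accumulator and an appended list.
    total = 0
    out: list[int] = []
    for x in xs:
        total += x
        out.append(total)
    # Externally pure: the input is untouched, same input -> same output.
    return out

print(running_totals([1, 2, 3]))  # [1, 3, 6]
```

Callers get referential transparency; the body keeps the cheap, obvious loop.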
The problem with OOP, FP, etc. is not that the techniques are BAD, it's that all techniques are trade-offs. We're very good at talking about the advantages of some technique, but seem to always forget the costs associated with it.
My problem with object oriented programming is not the object part, it's the oriented part. In object oriented programming there is, usually, no trade-off. It's objects all the way down.
The problem is that not many things actually need to be an object. But when something needs to be one, there is no distinction: no way to tell whether somebody made a thing an object because it makes sense, or just because that's the only way they could make it. So you have all those User/DateFormatter/Encoding objects for no reason other than language limitation. And then, on top of that, when you want an actual object (something that does its thing, has internal state, and has an interface to communicate through, not a method call), you are on your own.
Let's take static void main as an example. It's a series of operations called at the beginning (at least during my era); it does not need to be in a class. The very fact that we put it in a class (say, Start) feels like a language limitation.
The same with class Math, such as Math.floor and Math.ceil. Is Math a class, or is it more suitable as a namespace? No object instantiated from Math is ever required (and maybe the same goes for static classes).
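Python illustrates the distinction nicely (hedged sketch; `MathUtils` and `clamp` are made-up names, `math` is the real stdlib module):

```python
# The stdlib itself makes the namespace choice: math is a module, not a class.
import math

# The class-based grouping that Java forces on you:
class MathUtils:
    @staticmethod
    def clamp(x: float, lo: float, hi: float) -> float:
        return max(lo, min(x, hi))

# The namespace-only alternative: a plain function in a module.
def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(x, hi))

# Same operation either way; only the grouping mechanism differs.
print(MathUtils.clamp(5.0, 0.0, 2.0), clamp(5.0, 0.0, 2.0))  # 2.0 2.0
print(math.floor(2.7))  # 2
```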
However, I don't think OOP languages are bad or limited, aside from the static void main part.
> The same with class Math, such as Math.floor and Math.ceil.
I still want to group them somehow, and this "somehow" had better have the name "Math" so other people can see what it is.
Why are people so attached to the name "class"? Call it "namespace with state capabilities" if you're annoyed by "class" so much. Or just use namespaces like in C#. You still want namespaces, that's the point.
And speaking about your example: more often than not, we've got quite a bit of functionality attached to program startup, and I'd rather have it focused in one single module ("Start"/"Application"/"Runner", whatever you want to call it), but I want it to be _focused_ and easily searchable, not floating in a sea of functions.
And by the way, the same argument can be made about functions: why do I have to create "main()" at all? Why not just start writing instructions like in a bash script or Python?
> Why are people so attached to the name "class"? Call it "namespace with state capabilities" if you're annoyed by "class" so much.
The central, first-class “namespace with state capabilities” in OOP is an object. A class is a namespace with state capabilities that may (or may not, because pure static classes are a thing) be an object factory, may or may not be an object or first class entity, and, probably, if the language has static types, is also a type.
And all of that is also true of objects, except the last, and objects are always first class in OOP. So, aside from static typing, classes are superfluous, unless they are just a name for objects with certain common features.
I love this take. Programming techniques are that, tools on our belt. One should neither fall in love with a particular one, nor try to use it exclusively. Either it fits, meaning it provides advantages that outweigh the added complexity, or it doesn't.
An entire generation of programmers were brought up on languages which took an orthodox view on this (Java, C#, though they're both better at it now).
Add to this all the books and seminars on OOP, patterns, etc., and in some circles it's impossible not to code in the strict "break everything apart" OOP style.
The meaningful alternative to OOP is modular, object-based programming based on composition instead of inheritance. That encompasses both "pre-OOP" and "post-OOP", mainly functional-based idioms. Simple imperative programming does not really scale to larger software systems, modularity and compositional thinking are essential.
Your statement seems to imply that imperative programming does not have modularity. In my experience, I have worked on programs written before OOP was in vogue, in imperative languages, that I consider to have had good modular design. These programs had millions of lines of code and I felt they were easy to understand and modify.
What do you mean when you say that imperative programs lacked modularity?
Modularity in imperative programming is ad hoc. It's not part of the paradigm itself like in modern functional programming, or object-based programming with interfaces and composition.
> These discussions tend to boil down to OOP vs FP. That's a false dichotomy. The alternative to OOP is not FP, it's pre-OOP imperative.
Thank you for raising this oft ignored fact. I remember graduating from C to C++ in order to hack on SGI’s OpenGL in 1991. Possibly graphics and scene-graphs are a sweet spot for OOP, but I found the approach very elegant and quite effective.
OOP, as a paradigm (it is not a “tool”), is one of the handful of viable approaches for organizing and writing code. Given that these are paradigms, it is natural that there are those who have a natural mental affinity for one and abhorrence for another. In my view, the only merit in these comparative critiques is when contextualized within a specific, e.g. UI development, problem space.
(Also, Joe (RIP) is indeed wrong. OOP doesn't suck, it blows. /g)
A particular point people don't get is that Java joined functional programming instead of beating it.
The Hotspot compiler was based on a research runtime for a functional language. Type inference is basically the same as in ML-family languages. With heavy use of lambdas and higher-order functions, some of my Java looks like ML. If I want to have functions like
maybe(x, f1, f2, ...)
that work (emphasis) like the maybe monad, my codegen writes the boilerplate out to arity 30. (Funny: the codegen lets you write Java in a Java DSL that looks like typed S-expressions.)
My POV ("Writing functional stuff in Java is a headache") gets lost between the much larger camps of "Functional bad, Java good" and "Java is functional and therefore good". Or worse "Java is better because it's both OO and FP".
I want type inference to work as well as in ML. I don't think it ever will.
I want to be free of nulls. I don't think I ever will be.
I want the language to just let me implement `interface Monad<A>` and not need to generate code to arity 30.
>The alternative to OOP is not FP, it's pre-OOP imperative programming with various techniques a la carte.
The alternative to an object is an abstract data type; both are data abstraction techniques that hide the data under a set of operations. The main difference is that an object includes an interface while an ADT does not. Nothing prevents an object's interface from being functional, nor an ADT's set of operations from being procedural.
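The distinction can be sketched in a few lines (hedged Python sketch; the stack functions and `Stack` class are made-up examples, and the tuple representation is only notionally opaque since Python can't enforce it):

```python
# ADT style: the representation (a tuple) is hidden by convention behind a
# set of module-level operations; only these are supposed to touch it.
def stack_new() -> tuple:
    return ()

def stack_push(s: tuple, x: int) -> tuple:
    return s + (x,)

def stack_top(s: tuple) -> int:
    return s[-1]

# Object style: the same abstraction, but the operations travel with the
# value as its interface.
class Stack:
    def __init__(self) -> None:
        self._items: list[int] = []

    def push(self, x: int) -> "Stack":
        self._items.append(x)
        return self

    def top(self) -> int:
        return self._items[-1]

print(stack_top(stack_push(stack_new(), 42)))  # 42
print(Stack().push(42).top())  # 42
```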
The thing I find interesting about revisiting these old pieces is that the debate about the merits of OOP is effectively stymied. This post and the article it replied to posted yesterday (https://news.ycombinator.com/item?id=26586829) could effectively have been written today. And the debate in the comments tends to be pretty formulaic as well. It's one of those perennial topics that generates a lot of heat from rehashing tired arguments without generating much light in terms of new thoughts.
There are a bunch of topics like this, in my own area you see the same four or five topics come up repeatedly with nothing new or interesting said on them.
I'm kind of intrigued if anyone has any thoughts on how to shock life back into these zombie debates?
I think the core issue is that OO is one of the first “solve everything” paradigms most programmers learn. Lots of good programmers I know (myself included) have spent a year or two of our careers convinced OO is the solution to every programming problem ever, because it feels so powerful. Some programmers never grow out of that phase.
And that means there’s an OO shaped standing wave in the pool of programmers. Each year the OO wave is made up of different people, but the wave itself is always there because there’s always a new generation of programmers going through their baptism of classes. We have to keep reposting Joe Armstrong’s rant because the sooner people learn non-OO paradigms, the better our field is for everyone.
These debates only feel like zombies to some of us because we’re old, and we’ve yelled at the kids for overusing classes on our lawns plenty of times already. There are some other interesting conversations - like, where is the line between when you would use OO or bare structs or FP? But those conversations are fuzzy and nuanced, and hard to have without real code examples.
>...those conversations are fuzzy and nuanced, and hard to have without real code examples.
This rings true to me. I've dabbled in several languages but started out with Java and for the longest time its style of OOP was how I approached building systems.
In the last 2 years though, I've had the opportunity to work with Typescript and it's been really nice choosing what approach you want to take (structs, functions, or objects). I've found that starting off with plain structs and functions has let our systems grow in a way that would have been constrained by Classifying things up front -- you really _do_ learn more about your application as you're writing them, so why paint yourself into a corner by dictating what data and operations have to go hand in hand upfront?
> you see the same four or five topics come up repeatedly with nothing new or interesting said on them.
Thousands of humans are born every day, and by the time they get to be programmers they must have learned about all these topics. "Coming up repeatedly" is necessary for that. Education is one of the most fundamental activities of mankind, and it is based on having the same discussions again, and again, and again. You are not supposed to participate in all these instances of the same discussion, but criticizing them for being repetitive is absurd. Most of the time there is really nothing new to say.
I don't think criticizing the repetitive nature is absurd but maybe it's fruitless. My meta point isn't really to make people feel bad about taking part in these debates but to question whether we can get more out of them.
And whilst I agree that new people learning about things can be an effective trigger for rehashing these arguments I don't think all the people who so happily engage in them are new to the argument. In fact I think the constant rehashing of them is mostly the cause of new people learning about the argument in the first place. So in that sense they could be useful (if the argument is actually useful).
IMO this is an example where both parties can be right and wrong at the same time. "Correctness superposition", if you will. I have come to think of programming languages themselves a bit like art. Take fiction, for example: arguing about the "provably best" way the thing the programmer is typing should look is a bit like arguing about which writer is the greatest. Everyone's going to have an opinion, and the only thing the majority will agree about is when an author is really terrible.
When I started out programming it was all straightforward procedural languages, working up to C eventually. It took me a bit to figure out why OO was this big new shiny thing everyone loved when I first started to learn C++ and Java. Eventually I understood that the strength was that tying functions that manipulate state directly to the struct carrying that state was handy and saved a lot of type checks and variable passing for the programmer.
Java and C# are good examples of languages where there's a straightforward admission that everything can't be OO. Static methods are pretty non-OO in their conception but even if you wanted to argue they fit, how many projects eventually wind up with a `public static class Utils` or end up shoveling in extension methods in C# to accomplish some common task that in C would just be an include and then calling the function directly.
Why would you want to revisit these debates, having progressed beyond them? :)
Instead of rehashing "Is OOP good, actually?", we should explore the boundaries of the paradigm by making a popular language that's based on prototypes, or that privileges composition over inheritance, or pulls apart the various pieces of OOP into standalone concepts that can be mixed and matched. If we get inspired, maybe come up with some new pattern or language affordance that's similarly groundbreaking to functions, objects, classes, message-passing, threads, or whatever. Maybe the obviously-better development paradigm of 2050 hasn't been articulated, yet.
> Why would you want to revisit these debates, having progressed beyond them? :)
A moment of zen-like grumpiness where I was motivated to complain about it.
And yeah I totally agree that the PL grudge match is better solved by experimenting with paradigms than drawing battle lines on a forum. I was aiming at being even more meta though.
> we should explore the boundaries of the paradigm by making a popular language that's based on prototypes, or that privileges composition over inheritance, or pulls apart the various pieces of OOP into standalone concepts that can be mixed and matched.
I can neither confirm nor deny that I had any particular languages in mind... but, yeah, having slightly or wildly different takes on various patterns can help us explore the practical consequences. I would suggest that the history of JS is good evidence that prototypes are harder to use and reason about than classes.
What I've found with all these "paradigms" is that they're only beneficial some of the time, usually not most of the time, never all of the time. It takes some experience and an anti-dogmatic attitude to judge, for example, when a black box is a good choice and when plain old data is a good choice. Woe to those who attempt to shoehorn their idea of purity onto every application domain they encounter.
This is what I wanted to write. There's a bunch of tools in programming, as in any field. There's no tool that's right for every job or wrong for every job.
Sounds boring, but judgement is what the work is about. When you're talking about a specific system you can say "I think we should use OO for this" because so-and-so. But each system needs its own explanation.
I can see that Erlang and Joe's (RIP) insights about OO might conflict with your view of the world, but IMHO that's just because of the OO indoctrination people grew up with.
Recommend learning key concepts of OTP/Erlang/Elixir first, and then revisit your initial opinions.
Much of the debate or criticism of OO has been targeted at the theoretical side of object orientation, and perhaps less towards the practical implementations, which allow for some flexibility in the programmer's approach to objects.
I would like to see someone do a language that enforces all of the OO paradigms and best practices on the programmer, just to see what the resulting code would look like and how it would behave.
For instance, I'd love to see a compiler that throws an error if you violate the single responsibility principle. Much of the OO spaghetti code I've seen has been because objects are basically just structs with methods and 10 different ways an object could change state.
For practical purposes I doubt I'd ever use such a strict language, but it would be interesting to see if the result is safer and clearer code. I suspect it's more code, though.
The problem is that the "OO paradigms and best practices" are not formally defined, so the program that would enforce them on you cannot be written. OOP was unfortunately never defined formally enough to be amenable to this approach. (For example there are different interpretations of Liskov substitution principle.)
There are strongly-typed functional languages that are based on a consistent formal foundation, like Haskell, so you can look into these. But there are still some "escape hatches", because sometimes you just need them.
There's a fundamental problem with this kind of debate. Because we don't have a lot of empirical data, and because human interaction is complex and often unintuitive, we don't have a good theoretical model of how the development and maintenance process actually works. So the arguments tend to be based, in the best case, on unrepresentative and biased samples, and in the worst case, on personal aesthetic preference. And when we look at empirical data in the field, whichever paradigm is dominant will obviously show the actual effect of programmers and shops at wildly different skill levels using it, while the non-dominant paradigm won't be able to show success at doing things better because, well, it doesn't have enough people at different skill levels using it.
Here's my rebuttal of this rebuttal. For the record, I think OOP is wrong in the sense that it had some advantages over procedural programming, but those have since been superseded by functional (and type/category-theoretic) approaches.
In some sense, OOP tends to conflate too many concepts into a single one, a class (or an object), to the point where it is harming the clarity of the code.
Ad objection #1: The post doesn't explain why you would want to model the "natural clustering" of functions on pieces of data, or why it is not just adding accidental complexity. Clusters of functions and data types do tend to happen, so what? (In fact, they tend to happen differently for functions and for data types, which makes encapsulation problematic.)
Ad objection #2: This has not been refuted.
Ad objection #3: I think this is pretty much a complaint about subtyping polymorphism, which yes, tends to be a problem (AFAIK nobody has really given a good semantics to it). For example, a GUI container element needs to define the type of its elements so it can define how to deal with them. It's difficult to use composition there, because you need to define certain functions on those elements, and many classic OOP languages (Java) do not let you add an interface to an existing class. Parametric polymorphism (and its enhancements like type classes) solves this problem much better, IMHO.
Ad objection #4: One of the problems of OOP is that modules also tend to be modeled with classes (see for example public/private access controls). But modules are a third distinct useful category of things, aside from functions and data types. OOP conflates these things into a single entity, an object (or a class).
I've found in practice that it is much more useful to have modules as a separate concept, with explicit import/export controls on the module boundaries. That is, do not tie access controls to functions or data types themselves. This makes modularization easier, because there is an additional layer (the module definition) where you can override existing exports and imports.
A problem with this kind of debate is that, in this case for example, the author is using quite a good OOP language which is hardly used by anyone.
Naturally, if I criticize OOP, it will be about what is currently used, not what might have been.
Also it is uncharitable. If you have experienced problems with OOP, then you know exactly what Armstrong is talking about.
And ironically, many have said that Erlang is perhaps the most object-oriented language if you follow Alan Kay's original idea, which was centered around message passing.
The OOP that dominates today originates with Simula and is part of another OO tradition.
> The OOP that dominates today originates with Simula and is part of another OO tradition.
Simula (and early Smalltalk) were an inspiration for Hewitt's Actor model, which directly influenced both Scheme and Erlang. They all have Simula as a fairly influential ancestor.
I'd place the origin of the OO that dominates today with C++ and continuing through Java and C#.
When I started programming in Pascal in the early 90s, my procedures/functions and my data were decoupled, spread all over the place. Then I started learning (Turbo)Pascal with OOP. It was a godsend. All of a sudden I could structure my code in a way I wasn’t able to do before.
Now thirty years later I know how to structure my (Python) code without OO: Via modules. I hardly use OO anymore. And I am drawn to FP more and more. But I see cases where OO is useful.
I don't think it's either/or. We should be glad we have all sorts of paradigms to go back to.
I'm a recent FP convert after 15ish years of OO. I don't hate OO now or anything, but maybe a little? It adds so much more to think about for what I now consider to be no benefit, at least for the types of systems I've worked on in my career. A big thing I've realized is that everything I've worked on has modelled things that are already abstract. For example, I'm working on a scheduling app right now. In real life, a schedule is an abstract concept realized on paper. Our app certainly doesn't have a Paper class. That may seem nit-picky to some people but it's true: the whole "model the real world" breaks down there. My apps also always have way more behaviour than they do things. With FP it's much easier to model a lot of behaviour around a generally fixed set of things than it is with OO.
This is not to say there aren't good use-cases for OO (video games come to mind), but for my day-to-day tasks, I've found FP much simpler (and immutability is a god-send... it's like, "Hey, you know that foot gun you've been carrying around? You don't need that anymore!"). I would encourage any OO zealots to give FP an honest try.
> It adds so much more to think about for what I now consider to be no benefit
That's my problem with OO too. It encourages people to add unnecessary mutable state to programs and then they have to think about it interacting with everything else and with itself.
When you realize most systems are actually pretty simple once you look at the fundamental data and how it changes and ignore all the architecture built around it - it's a very freeing experience.
> This is not to say there aren't good use-cases for OO (video games come to mind)
Gamedev is currently moving away from traditional OO towards Entity Component System - which is basically an in-memory relational database for your data + imperative code put into classes that are basically modules :)
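A toy version of that idea fits in a few lines (hedged Python sketch; real ECS frameworks use packed arrays and archetypes, and `Position`/`Velocity`/`movement_system` are made-up names):

```python
# Entities are just ids, components are rows in per-component tables, and
# systems are plain functions that iterate those tables - much like
# querying an in-memory relational database.
from dataclasses import dataclass

@dataclass
class Position:
    x: float

@dataclass
class Velocity:
    dx: float

positions: dict[int, Position] = {1: Position(0.0), 2: Position(10.0)}
velocities: dict[int, Velocity] = {1: Velocity(1.0)}  # entity 2 is static

def movement_system(dt: float) -> None:
    # "Join" the two tables on entity id, like SELECT ... JOIN.
    for eid, vel in velocities.items():
        if eid in positions:
            positions[eid].x += vel.dx * dt

movement_system(dt=2.0)
print(positions[1].x, positions[2].x)  # 2.0 10.0
```

Behaviour lives in the system, data lives in tables, and neither is owned by an object.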
> For example, I'm working on a scheduling app right now. In real life, a schedule is an abstract concept realized on paper. Our app certainly doesn't have a Paper class.
I am hoping that your application has a function for UnscheduledWorld -> ScheduledWorld.
> I would encourage any OO zealots to give FP an honest try.
Why assume that people who like OO more than FP do so only because they haven't tried FP? Isn't it possible that you are in the honeymoon stage with FP, and as you gain more experience with it (like you have with OO), "the scales will fall off your eyes" as P G Wodehouse would have put it?
I'm surprised that the original anti-Object-Oriented-Programming article says "data structures and functions should not be bound together", since in functional programming (if we take that as the extreme opposite of OO) they are still kind of bound together. For instance, a JS array ['cricket', 'football', 'badminton'] is both data and almost like a function at the same time, since you can iterate through the items and make code execute for each one.
For example, in a webpage, if there is a div with id=football, you can select it using `document.getElementById('football')` and then run code against it.
In my experience, when you have a field with two popular but different options and a lot of strong opinions about which is good and which is crap, neither side is right. Both sides have merit, otherwise they wouldn’t have a following. Examples: functional vs object-oriented programming, Ruby vs Python, Mac vs Windows, AC vs DC, rock vs pointy stick.
To OO or not OO seems like more of a religious preference about functional containment: either at the module level or at the type level. This is why neither seems to satisfy everyone and why there are flame wars about it.
On one side there is loose coupling between types and functions, and the other side has strong coupling. On one side, types can be composed simply and the other involves inheritance, overloading, and polymorphic rules.
Go, Haskell, and Idris are arguable examples of good type systems whereas languages like Python and Erlang have weak type systems. I wouldn't wish C++ on my worst enemy, but that's a religious preference not shared by all. Pascal without OO had a nice type system.
I find OO can wind up in a complex mess. If I have to look into 4 levels of inheritance I'm very lost. Encapsulation is great if your encapsulated black boxes function as they should. However that is often not the case, or at least you need to dig into that black box to understand the system.
Mixing data structures and models is something I find difficult. It can be hard to follow the logic of a simple function if it draws on several functions hidden inside an object. The same can be true of function composition, but many functional approaches let you pipe data through a series of transforms. This is hard to do when the objects have logic hidden within them.
That's a pretty good insight: FP encourages working with simpler data types, whereas OO encourages encapsulating everything into these complex types which become unbearable as they get deeply nested.
One problem with encapsulation is that it is used to protect the internal state of an object against external modifications, but with very little benefit. Languages which are more relaxed on this subject (Python to some extent, for example) have shown that programmers are, most of the time, just not stupid enough to mess with the internals of an object, and that all the ceremony of encapsulation brings little value.
On top of that, a much bigger problem is that encapsulation is fundamentally broken: it does not protect against a real source of bugs, concurrent accesses to the object.
The point of encapsulation is (1) to protect meaningful invariants of the internal state - you care about external modifications because those might break your invariants; (2) to abstract away an object's interface from how the internal state happens to be implemented. Sometimes there are meaningful choices to be had in implementation, and any allowance for "external" modifications basically adds to your object's interface and breaks that abstraction. Both of these can be useful. Concurrent access is yet another issue, of course, and newer languages make it easier to control it.
The promoters of object-oriented design sometimes sound like master woodworkers waiting for the beauty of the physical block of wood to reveal itself before they begin to work. “Oh, look; if I turn the wood this way, the grain flows along the angle of the seat at just the right angle, see?” Great, nice chair. But will you notice the grain when you’re sitting on it? And what about next time? Sometimes the thing that needs to be made is not hiding in any block of wood.
I don’t want to toot this horn too much (is that a saying?) but I really like the path Rust took.
It is not technically OO but close enough. For me as someone who learned programming with Java (and some SML) and mostly worked with OOPLs after, I never had problems whatsoever. You don’t have inheritance but traits (and importantly trait objects).
The ways to model in Rust with Enums, Structs and Traits are amazing and somehow really fit my mental model.
One issue I have with (pure) OOP is the way that methods are associated with a single type (and its subtypes). What about operations that don’t naturally belong to any one type?
For example, binary operator overloading. With unrelated classes `C` and `D`, should the implementation of `+` that handles `C.new() + D.new()` be added to `C` or to `D`? Conversely, if both `C` and `D` define an implementation of `+`, which one should be preferred?
I think binary operator overloading is a bit of a stretch here and it usually has clear rules in whatever language you choose.
But the more general question of where to put `combine_with()` in `A.combine_with(B)` is on neither (and not necessarily on another object either).
Unfortunately, this is prevalent in OOP code, but it is simply bad architecture. Of course, with it being so common, it's natural to blame OOP, because, in a way, it lets people get away with it easily.
As others have pointed out, the most pragmatic approach is the most reasonable one: use functional paradigms where they make more sense, OO-ones where they do. When things stop making sense (as in your example), don't do it!
To best understand what things make sense, start doing TDD and you'll quickly learn to de-couple things that don't need to be coupled (even if you don't end up doing TDD all the time).
In C++, quoting cppreference.com: "Binary operators are typically implemented as non-members to maintain symmetry (for example, when adding a complex number and an integer, if operator+ is a member function of the complex type, then only complex+integer would compile, and not integer+complex)". This makes perfect sense.
In other languages, I'd say: perhaps implement the operator in the return type of the operation?
OOP, FP, procedural, or even good old fashioned start-at-the-top, run-to-the-bottom imperative scripting all suffer from human beings choosing the wrong paradigm, or implementing that paradigm poorly. OOP has been largely a force for good, and the modern FP movement has been great, too. Same goes for dynamic typing and strong, static typing. It's all good if applied to the right problem.
I only like this and the opposite post because now I know that even experts don't agree and I can feel good about using OOP when I feel like it is appropriate.
As a rookie it can be a tad jarring to read that OOP is an invention from the devil just to lure us into some hell later on. Now I know I can safely take such arguments with a grain of salt.
It’s all about your thinking process - the way you want to think when you solve a problem. It’s not about the programming language. You could use OOP even in C. And clearly there is no right or wrong here. As always, it depends, in some cases your problem is solved easily with OOP or FP or with imperative paradigm or any other.
Yes, I also agree:
For me, OO programming gets me close to the design of the problem I'm trying to solve.
I mean even before programming.
Then, when I feel ready to program my solution, actors become classes naturally.
I tend to believe OO languages simply give us less work to design & program solutions.
After many years of a lot of OOP, many people have realized that putting behavior and state together into one entity might not be a good idea. We can see modern languages moving away from such practices. For example, in Clojure one has defmethod, which defines a method separately from the data. In Rust one has structs and implements traits for them separately. In some languages OO systems are optional, opt-in libraries, not part of the core language; think GOOPS and CLOS. Usually those languages define functions for data structures separately, so you will have something like vector-length.
So it seems wisdom is slowly sinking in and we are progressing away from bundling state and behavior together.
Why is this done? For one thing, to avoid having to subclass loads of stuff. In Rust one uses composition over inheritance and can add behavior later by implementing traits that make use of parts of the structs. This way one does not need to subclass everything to add a new behavior. Similar for defmethod in Clojure. Erlang raises this to a whole new level by defining the behavior in separate actors, which process messages; the messages are the data, and they travel around without any behavior attached. Overall it makes it possible to actually have encapsulation. A similar thing happens when we leave the boundaries of one machine and look at the web and REST, where we basically pass messages, conceptually similar to what happens between actors. For details about how encapsulation is broken in mainstream OOP, read the article "Goodbye, Object Oriented Programming" by Charles Scalfani: https://medium.com/@cscalfani/goodbye-object-oriented-progra... (Sorry for the medium link. Does anyone know a better link?)
Next I'll address some things in the article, which I see problematic and questionable.
> Putting them together creates a boundary around this group of data and functions - encapsulation. Sure, you may call that a “module” but an object is IMHO slightly different since objects have identity, a life cycle and we can hold them, send them around etc.
That is not what a module usually is. A module usually groups things which are independent from each other but are used in the same context. For example, a module could contain definitions of math functions. It does not track state or data. It might contain data structure _definitions_, but not their instances. A module is not a class or object replacement.
> So in some sense objects can probably be viewed as modules but often more fine granular. And we can combine such “modules” into larger modules (since objects have identity and can be referenced etc) and we can create and kill them dynamically (life cycle).
See above for the differences between objects and modules. It is not about being "more fine granular". Objects are different in character and should be used in different scenarios.
I gave up after his comments on the first OO objection. This blatant ad hominem and sweeping generalizations pass for good criticism? Not in my book at least.