Friday, October 3, 2014

Object Oriented Programming Isn't Bad!

I was hanging out on Facebook today, when I came across a rant written by a friend of mine. My friend is an extremely talented programmer, artist, and overall just a gifted individual. But he recently came to the realization that he hates Object-Oriented Programming (OOP). So I ended up wasting an hour or two of my time ranting in disagreement.

My rantings aren't anything particularly special. The things I've said below have been said a thousand times before by people far smarter than me. The same also applies to arguments against everything I'm about to say. But these sorts of debates are very common amongst programmers, so I thought some of you all might enjoy reading if I shared :-)

So without further ado, here's what my friend wrote, and my response is what follows.
After years of C++ and object-oriented programming, I have come to the ultimate conclusion of how useless this paradigm is in the programming world. It's also made horrible by the fact that a lot of substandard programmers use it, to where it's much easier to run into software that is utter crap because of the object-oriented style. Object-oriented design leads to very bad design choices that are riddled with abstractions and indirection that aren't very efficient and must be waded through to debug later in the future. This means that to do something efficiently you limit yourself to what is only available in procedural languages like C, so there's no real reason to even use OOP at all.
Basically, I came to realize that a lot of programmers out there do not get the concept of "pay only for what you use". Which, inherently, makes them "bad" programmers. I say "bad" as in "they can't write code that is efficient". These programmers might come up with very nice (as in, pretty) pieces of software. They also might come up with very stable pieces of software. But they can't come up with code that is at least as efficient as code written by the programmers who know exactly the advantages of C (procedural) over C++/C#/Java/Python.
The main issue at hand is "quality of life". I now understand that there are a lot of programmers out there who are only "consumers". They consume compilers and languages (and, by extension, processors) as they see fit for their convenience and not as they see fit for the efficiency of the final software. They only want to "get the job done", and do it in the most efficient manner for the number of lines they have to write, or for how easy the code is to write.
But a "real" programmer will actually be aware of the generated code. They do not care about their quality of life, as long as the generated software is slim and efficient. These programmers will know exactly what kind of assembly code is going to be generated as they write the source code. They are perfectly aware of the individual cost of each line of code in terms of generated code size and speed. And C is the only language that fully provides this ability. Linus's original argument can be translated into "if you are an OOP advocate for the sole reason that it's easier to use for some programmers, then I don't want you to contribute to git, because you're most likely not aware of code efficiency, and thus you're most likely to contribute inefficient code into a project where efficiency is the main selling point."
Now, in an environment where the code is fully controlled, and where code contributors are fully respected and trusted, then OOP might be acceptable, because then one can trust them to make the right choices and dodge the various pitfalls that OOP will present them. When you can't possibly fully review each and every contribution being made, because there are so many, then C presents the advantage of fencing yourself off against the common bad programming practices that object-oriented programmers tend to be plagued with.
I disagree. I think that OOP is a good thing. Same goes for parameterized types (aka templates or generics). The only real problem with OOP is that most programmers are idiots who think inheritance of implementation is a good idea. It’s actually quite evil. It's one of the most abominable acts a programmer can commit. Even among my most trusted colleagues, this stuff still happens, but they tend to do it a whole lot less than everyone else in the industry.
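To make that concrete, here's a minimal C++ sketch (the Logger names are hypothetical) of the alternative I'd advocate: inherit interfaces only, and reuse implementation by composition rather than by deriving from a concrete class.

```cpp
#include <cassert>
#include <string>

// Inheriting an interface is fine: the base carries no implementation
// to get entangled with.
struct Logger {
    virtual ~Logger() = default;
    virtual std::string log(const std::string& msg) = 0;
};

// The implementation lives in its own concrete class...
struct PlainLogger : Logger {
    std::string log(const std::string& msg) override { return "[log] " + msg; }
};

// ...and is reused by composition: TimestampedLogger wraps any Logger
// instead of inheriting PlainLogger's internals.
class TimestampedLogger : public Logger {
public:
    explicit TimestampedLogger(Logger& inner) : inner_(inner) {}
    std::string log(const std::string& msg) override {
        // Fixed stamp purely for illustration; delegate, don't override guts.
        return inner_.log("12:00 " + msg);
    }
private:
    Logger& inner_;  // composed, swappable at runtime
};
```

The payoff: `TimestampedLogger` never depends on how `PlainLogger` works internally, so neither class can break the other by changing its implementation.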

Do you religiously write unit tests? Because that's the main advantage of OOP. It's very difficult to write testable code without OOP. But you know what's even better than OOP? OOP + dependency injection. I think dependency injection is really the pinnacle of object-oriented development. And the best way to do DI-OOP is Dagger 2.0.
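For the skeptics, here's what DI looks like stripped of any framework: a minimal, hypothetical C++ sketch of constructor injection, which is essentially the wiring that a framework like Dagger automates for you. None of this is Dagger's actual API.

```cpp
#include <cassert>
#include <string>

// The dependency is expressed as an interface...
struct Clock {
    virtual ~Clock() = default;
    virtual int now() = 0;  // seconds since midnight, for illustration
};

// ...and handed in through the constructor, so the class never
// reaches out to the real system clock on its own.
class Greeter {
public:
    explicit Greeter(Clock& clock) : clock_(clock) {}
    std::string greet() {
        return clock_.now() < 43200 ? "good morning" : "good evening";
    }
private:
    Clock& clock_;
};

// A fake for tests: no system calls, fully deterministic.
struct FakeClock : Clock {
    int t = 0;
    int now() override { return t; }
};
```

Because `Greeter` only knows the `Clock` interface, a test can inject `FakeClock`, set the time to anything it likes, and assert on the result with zero flakiness.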

Testing is such an important part of programming. So much so that I'd say if a particular paradigm makes testing better, then it's probably a good thing. However, I'd like to attach a disclaimer that I'm not exactly advocating TDD. I think TDD is for zealots. The only time, IMHO, when tests should absolutely be written first is when you're fixing bugs. It's important to prove that what you think you're fixing is indeed what you're fixing before you fix it!
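To show what I mean by test-first bug fixing, here's a hypothetical C++ example: the classic signed-overflow bug in a midpoint function, pinned down by a regression test that fails against the buggy code and passes after the fix.

```cpp
#include <cassert>
#include <climits>

// Hypothetical bug: midpoint was originally written as (a + b) / 2,
// which overflows for large ints. The fixed version avoids the sum.
// (Assumes a <= b, which is enough for the illustration.)
int midpoint(int a, int b) {
    return a + (b - a) / 2;  // was: (a + b) / 2  -- overflows near INT_MAX
}

// Written BEFORE the fix: against the old (a + b) / 2 code the first
// assertion trips signed overflow, proving the test actually reproduces
// the bug. After the fix it passes, and stays as a regression guard.
void regression_test() {
    assert(midpoint(INT_MAX - 1, INT_MAX) == INT_MAX - 1);
    assert(midpoint(2, 4) == 3);
}
```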

But one thing I must admit, even though I like OOP, is that I have a great deal of respect for how Linus Torvalds writes C applications. When he wrote Git, he broke it down into a zillion tiny little C programs—almost to the point where each C program could practically be considered a class—and then tested them using Bourne shell. Brilliant. That's how coding gets done when you treat the kernel itself as your framework, rather than as a cesspool upon which all your other goofy fleeting frameworks float.

I've done a lot of cool stuff, but overall I'd consider myself a speck in Torvalds's shadow. There are very few software engineers on this planet who are in the same league as him. I'd say Richard Stallman, Donald Knuth, Jeff Dean, Sanjay Ghemawat, and John Carmack are some of the very few people I'd place up there beside him.

Here's another important thing to consider: data structures. C programmers don't have a whole lot of quality data structure libraries at their disposal, because the C language makes it very difficult to write abstract code. On the other hand, C++ programmers have a whole lot of cool things to choose from, like sparsehash, and they can always count on these libraries to have remarkably similar APIs. The same cannot be said about C data structure libraries, whose APIs are oftentimes incomprehensible (like Judy).
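As a taste of what I mean by similar APIs: sparsehash's containers deliberately mirror the STL associative-container interface, so calling code doesn't care which one is behind the alias. The sketch below uses `std::unordered_map` so it stays self-contained; to my understanding, swapping the alias to `google::sparse_hash_map` (modulo sparsehash's deleted-key setup) would leave the calling code untouched.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// One alias is the only line that knows which hash map we're using.
// std::unordered_map, std::map, and sparsehash's maps all speak the
// same find()/end()/operator[] dialect.
using Table = std::unordered_map<std::string, int>;

int lookup_or_default(const Table& t, const std::string& key, int fallback) {
    auto it = t.find(key);            // same find() on every STL-style map
    return it == t.end() ? fallback : it->second;
}
```

That's the real win: library authors converging on one interface means you can benchmark-shop between implementations without rewriting your call sites, which C's incompatible APIs never let you do.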

It's also worth mentioning that, due to the way memory behaves on modern computing architectures, the complexity of the assembly output for your code can be deceiving in terms of performance. Let's say you fine-tune your code to reduce the number of instructions required from, like, 100 to 10. A 3GHz CPU can perform many of its instructions, like adding two integers, in just 0.3ns, so you'd expect a 27ns performance increase. But oh wait—somehow a main memory reference got thrown in there that didn't hit the cache, so now you've got a 100ns penalty on a single instruction and you lose. Or maybe you reduced the number of instructions by using an O(n) algorithm instead of an O(log n) one, so now you need n main memory lookups, and you lose again.
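Here's a little C++ sketch of that effect: two loops that do the same arithmetic with nearly identical instruction counts, where one walks memory sequentially (prefetcher-friendly) and the other chases dependent loads through an index chain, so on a big enough array every step of the second loop is a likely cache miss. The timing claim is the architectural argument above, not something this snippet measures.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sequential traversal: the hardware prefetcher sees the access
// pattern and hides main-memory latency almost entirely.
long sequential_sum(const std::vector<int>& data) {
    long sum = 0;
    for (int x : data) sum += x;
    return sum;
}

// Chained traversal: same trip count, same adds, but each iteration's
// load address depends on the previous load. With a shuffled `next`
// chain over a large array, that's one uncached ~100ns hop per element.
long chased_sum(const std::vector<int>& data, const std::vector<size_t>& next) {
    long sum = 0;
    size_t i = 0;
    for (size_t n = 0; n < data.size(); ++n) {
        sum += data[i];
        i = next[i];  // dependent load: the CPU can't run ahead
    }
    return sum;
}
```

Both functions return the same sum when `next` is a permutation cycle over all indices; the instructions per element are nearly identical, yet only the memory access pattern decides who wins.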

These problems become more apparent when you're operating at scale. If you're dealing with several gigabytes of data in memory, choosing the theoretically optimal algorithms and data structures can have far more impact than writing the practically optimal code. Scale it up even further, where algorithmic iterations are no longer measured in opcodes—but rather in network roundtrips—and the difference between a line of crappy C++ versus its hand-tuned assembly equivalent just seems like ants.

But when it comes to code that doesn't have to operate on large data sets—like, for example, if you're writing a desktop operating system—tremendous performance benefits can really be had from writing hand-tuned code. Take for example MenuetOS, which is written in FASM. The entire freakin' operating system and all its applications fit in L2 cache (!!!). If everything you're doing fits in L2 cache or better, then you can probably get away with crap algorithms for everything. It won't matter, because L2 lookups are an order of magnitude faster than main memory, and L1 lookups are nearly as fast as the registers themselves.

But overall I endorse your brand of programmer elitism. There are just so many things lurking beneath the surface of well-written shiny code that every programmer needs to know in order to be a pro. We need to know how our code translates into machine code and how it executes on the system. We need to know the performance characteristics of the system. We need to know the upper limits for what the system is capable of doing. And we need to understand computer science.

If you're frustrated by the silly layers of abstraction people create that we oftentimes find ourselves needing to purge, then perhaps your gripe is with the solipsistic programmer?

The solipsistic programmer (e.g. James Gosling) rejects reality and substitutes it with his own imagination. He sees computer systems as how he wants them to be, rather than how they actually are. He shuns existing tools that work and instead seeks to replace them with his own. He rejects standards, without even trying to understand them. He disregards style guides, when he isn't complaining about them. He cares not about performance, for it imposes an impediment to his idyllic design. He has no interest in the quality of the end-user product, for his goal is to impose his vision and ego upon others. Whenever he finds austere beauty in software design, he will immediately try to snuff it out with bloat and complexity.

I wouldn't be surprised if 99% of all software engineering effort in this country is wasted on egotistical men (architects) who keep designing new junk, and on the poor innocent devs who have to cope with that junk. Eventually there are so many pieces of junk (platforms) that even more junk (cross-platform frameworks) needs to be created to cope with it all.

I should also note that the solipsistic programmer's political equivalent is the progressive ;)