24 November 2014

I’ve come to the conclusion that believing in idealisms is wrong.

When I was taking my first steps in learning Python, I somehow stumbled across a presentation by the name of Stop Writing Classes. I’m not even sure why I watched it. At the time I didn’t even know what classes really were. Actually, when it comes to Python, I still don’t. But it introduced a concept that I had never thought of before.

I discovered that developers, even high-level experts, can make bad decisions and mistakes. Before that, I always thought that programmers were perfect. I learned that programming doesn’t just require the skill to write code, but also the skill to make decisions about how you write it.

To anyone with moderate experience in software development, this might seem obvious or taken for granted, but I’ve found that beginners don’t learn about this until it is far too late, and teachers rarely emphasize it. Many of those who are learning programming through a school aren’t taught about the ethics of programming. They aren’t taught how to design library APIs, how to separate concerns, or how to write small cohesive modules.

As a developer early in my career, I spend quite a lot of time debating with myself over what concepts and practices I should stick to. At one point I was convinced that I was going to be a full-stack JavaScript developer until the profession died. At another I was dead-set on using Ruby, and at yet another point I made the decision to only use Haskell.

Up until recently, all of those decisions were based on whatever seemed “ideal.” I wanted to use JavaScript for everything because a tumor in me thought that JavaScript was actually elegant and that I could find plenty of good jobs for it. I migrated back to Ruby for a short while because I had forgotten what I originally loved so much about it. I tried out so many different languages, architectures, frameworks and paradigms that I almost got sick of it all.

My realization is that I shouldn’t be choosing something because it feels ideal or at least optimal, nor should I choose it in favor of “shipping culture.” I should make decisions as a developer based on my personal experiences alone. If I want to judge something I’m not familiar with, then I’ll take the time and attempt to use it in a generous scenario.

On the note of software methodologies for teams, I think they are pointless, at least for me. By methodologies I mean things like TDD, Agile, programming paradigms, and opinionated frameworks (“unopinionated” frameworks should just be called libraries). The problem I have with them is that their meaning and significance vary from developer to developer. Some developers feel that principles (methodologies, philosophies, whatever) like DRY, YAGNI, KISS, MVP, BDD, DDD, Ro3, etc. are absolute law. Other developers simply don’t care. Others pick and choose which apply to them.

This means that when I join a team of JavaScript developers, on a fundamental level, none of us will agree to use the same tools. Sure, we might make the decision to use Grunt over Gulp or whatever, but the reasoning behind that decision varies from person to person. One person on the team might be “that Haskell nerd” who wants to use a bunch of fancy “functional” libraries. Another person might have a serious Java background and want to turn everything into classes instead of sticking to idiomatic prototypal inheritance. Another comes from a Ruby background and wants to use CoffeeScript.
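To make that split concrete, here’s a tiny, made-up sketch (the counter is mine, not anyone’s real code): the same object written in the class-mimicking constructor style the Java person would reach for, versus the closure-based factory style someone else on the team might insist on.

```javascript
// "Classy" style: constructor function + prototype, mimicking a class.
function Counter() {
  this.count = 0;
}
Counter.prototype.increment = function () {
  return ++this.count;
};

// Factory style: a closure instead of `new` and `this`.
function makeCounter() {
  var count = 0;
  return {
    increment: function () {
      return ++count;
    }
  };
}

var a = new Counter();
var b = makeCounter();
a.increment(); // 1
b.increment(); // 1
b.increment(); // 2
```

Both do exactly the same thing, and that’s the point: the disagreement is never about what the code does, only about which style is “right.”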

Either we as a team agree to put these opinions aside, or we’re on the team because we already share a specific set of opinions. To this point, I don’t trust methodologies and I don’t trust people to follow them. You could make the argument that practices like Agile or TDD can help a team improve, but again I don’t trust people to commit themselves to the same level I would. I couldn’t trust every developer on my team to take TDD as seriously as the rest.

One day, when I get an office job and stay with a company for more than a month, I might have the chance to place complete trust in my team. But I’m not the kind of person to look up to people or trust them (unlike my newbie self years ago).

On that note, the personal problem I have with methodologies is that I simply can no longer trust them. There are too many contradictions, and they are too inconsistent to follow. I choose what to do based on what is logical, not what seems logical at first, nor what some bearded philosopher preaches. Back to my previous point: I choose what to follow based on the needs of the end user or myself, instead of the needs of my team or my “programming community.”

I hope this doesn’t make me seem extremely ignorant, but it probably will, and I’m probably using the wrong terms to describe everything. I’m only writing this to give context to my decisions as a developer and to inform those who might need it. I don’t know if I should say I’d “love to hear your argument against this,” but since this is a blog post from some obscure programmer on the internet, I’m sure you’ll write it anyway.

There’s definitely a note to be made on how all these methodologies came into existence. At some point in time some developer had a revelation about how they should write software and wanted to preach it to those they had influence over. So a different topic I could’ve discussed would be how you should stop preaching your revelations to those who are impressionable.