This weekend Cory tweeted a link to Dave Fancher’s great article on how F# doesn’t just encourage clean code but actually enforces it:
This turned into an interesting debate with Dave Fancher, Cory House, Darren Cauthon, Jay Harris and Randy Skopecek, spanning over 100 replies, about whether we should learn or adopt new technologies such as F#, and when and how to sell new tools and technologies to the company you work for.
Good points were made by all, and I invite you to click the date link in the tweet to read and join the discussion on Twitter. However, the 140-character limit really bit today: once a few Twitter handles are included, only a few characters are left for the points themselves. So I am writing this post to explain some of those points more effectively in longhand.
Cory put the thrust of my argument into a nutshell with this:
But I think we need clear definitions for both of these terms so that they are uniformly understood.
I am by no means an F# developer. I plan to learn it one day, but realistically it’s not going to happen until 2015. The same applies to D and Swift. I am currently learning Entity Framework and also need to learn more about Web API and NodeJS. If I tried to learn every new language that came along I would be an expert beginner in them all, but an actual expert in none of them.
Much of what Dave, Jay and Darren were saying reminds me of this:
Read books, articles, blogs, tweets. Go to conferences. Go to user groups. Participate in reading and study groups. Learn things that are outside your comfort zone. If you are a .NET programmer, learn Java. If you are a Java programmer, learn Ruby. If you are a C programmer, learn Lisp.
– Robert Martin, The Clean Coder
I definitely agree that different languages have their own strengths and it is important to know more than one. But we need to make choices about which ones are of most interest to us and weigh up the benefits of a shallow knowledge of many areas versus a deep knowledge of fewer areas.
Also, what you learn is a different decision from what you adopt. If you’ve spent a lot of time learning a new technology, it is tempting to want to use that knowledge in your everyday work. But there are several questions that need to be answered first:
Now I think there is a good discussion to be had about the appropriate amount of time to spend asking and answering questions like these. Surely it needs to be flexible according to the size of the decision and its impact on your company. Deciding whether to use, say, a new jQuery plugin should be a smaller and easier decision than adopting a whole new language.
New technologies tend to arrive with much more hype than criticism, because the only initial voices are those of the innovators who created the technology and of early adopters. There are both benefits and risks to being an early adopter.
The point is that how you spend your programming days will depend on where you are on the technology wave. If you’re in the late part of the wave, you can plan to spend most of your day steadily writing new functionality. If you’re in the early part of the wave, you can assume that you’ll spend a sizeable portion of your time trying to figure out your programming language’s undocumented features, debugging errors that turn out to be defects in the library code, revising code so that it will work with a new release on some vendor’s library, and so on.
– Steve McConnell, Code Complete
This is a topic covered by Doug Turnure in the course “How to Have a Better Career in Software” where technology is the first of five principles for a better career:
Any specific technology is a temporary thing. It has a window of relevance and when that’s over, you need to learn something new. As developers we tend to emotionally attach to our current technology and defend it in all situations regardless of its appropriateness or fit in the business situation at hand.
Successful developers can look beyond their innate attachment to a technology and are mindful about when to use it, as well as when to move to a new technology.
Thanks for your comments both here and on Reddit. I would like to respond to the following point:
So if you are a .NET shop then go right ahead and write something in F# because there is seamless interop with everything else that is .NET. Your boss honestly doesn’t care one way or the other as evidenced by the fact that some places are still using perl.
Managers generally don’t mind too much because they don’t need to maintain the code themselves. I’m more concerned with the opinion of co-developers. An important principle in programming is “the principle of least surprise”, and if a developer starts reading code to understand how an application works and then finds it changes into a completely different language based on a different paradigm, then surprises don’t come much bigger than that.
I would not want to do it without first getting permission from every other developer on the team, at a minimum, because any of them could quite likely need to maintain it. Making such a unilateral decision could reasonably draw heavy fire from both managers and co-developers.
Perhaps a good strategy would be to identify a very small piece of functionality that is much better suited to functional programming, write the solution in both languages and then demo it to the rest of the team saying “We could solve it like this using our current language but we have to do this… and this… or we could use F# and solve it in only … lines of code. Here’s how it works… what do you think?”
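To make this concrete, here is a minimal sketch of what such a side-by-side demo might look like. The problem (summing the squares of the even numbers in a list) is purely illustrative, chosen only because it shows the contrast in a few lines:

```fsharp
// The C#-style imperative version, shown as a comment for comparison:
//   var total = 0;
//   foreach (var n in numbers)
//       if (n % 2 == 0)
//           total += n * n;

// The same logic in F#, expressed as a single pipeline with no
// mutable state: filter the evens, square each, then sum.
let sumOfEvenSquares numbers =
    numbers
    |> List.filter (fun n -> n % 2 = 0)
    |> List.map (fun n -> n * n)
    |> List.sum

printfn "%d" (sumOfEvenSquares [1; 2; 3; 4])  // prints 20
```

A toy like this won’t win the argument on its own, but it gives the team something concrete to react to rather than an abstract claim about paradigms.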
I was wondering why more hasn’t already been written about this subject but then I found that this is partly covered in chapter 5 of Domain Driven Design. Page 118 is a cautionary tale, summarised as:
A story from an object oriented project of only a decade ago illustrates the risks of working in an immature paradigm…
…Was it impossible to use this technology for this application? We were out of our depth…
…Several months were lost in this recovery, in addition to the earlier months spent going down a failed path.
This chapter goes on to give advice on when it is or isn’t appropriate to use a non OO paradigm.
Update March 2015:
I have just watched Erik Dietrich’s course “Making the Business Case for Best Practices”. He covers three definitions of “best practice”: the ideal, the real, and the cynical. He also explains in detail what a business case is, and that something is only a best practice for your company if it helps your company earn more money. The course covers financial concepts such as ROI and payback period, and describes how to present a business case for various “best practices” in financial terms. I believe this approach could be used to sell any new technology or tool to your company, and I am going to try it myself.
I have since used this approach to sell NSubstitute as a better alternative to Rhino Mocks. I maintain that Rhino Mocks was a fantastic product “back in the day”, which was a major reason why I asked Oren Eini for an interview recently.
Unfortunately, Rhino Mocks hasn’t kept up with the times so well. NSubstitute, on the other hand, has been a joy to work with: it’s more reliable, more intuitive and more readable, and I’m finding that I am much more productive with it. However, we still use Rhino Mocks as well; there is no need to spend a lot of time rewriting old tests that use it.