The Coding Monkey

Friday, September 30, 2005

I'll Be Sticking to NUnit Thanks

I've recently been playing with the Visual Studio 2005 betas, including the new unit testing framework they've bundled in. First impressions with this sort of offering count for a lot with me... and my first impression was that it sucks... and sucks royally. That's a bold statement, I know. I say it because the framework is missing one huge, gigantic, extremely important feature that NUnit has: NUnit lets you inherit from a test class, and Visual Studio doesn't... and won't for the first release, according to this bug report on Microsoft's site.

Why would you want to do this anyway? Allow me to paint the picture. If you're like me, you were hugely disappointed with Microsoft's initial offerings of collections. Hashtable and ArrayList just aren't enough for some of us. What about Binary Search Trees and Skip Lists? What about a Set class? Being the good general-purpose developer that I am, I wrote my own. To make them easily usable among my peers, I also made sure they conformed to the standard Microsoft collection interfaces, like IDictionary. And being the good general-purpose developer, I wanted unit tests that fully exercised my new collection classes... but I didn't want to write a separate test for each one. After all, if all my collections implement IDictionary, I should be able to write one big test class that runs against IDictionary, then create simple classes that inherit from it, each one constructing the specific collection to test against... which is exactly what I did. It's a rather big test class too, covering not only the basic functionality but also all the out-of-bounds cases for the IDictionary methods, and the border cases besides.
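For the curious, the pattern looks roughly like this. This is just a sketch of the idea, not my actual test code; the class names (DictionaryContractTests, SkipList) are made up for illustration, using NUnit 2.x attributes:

using System;
using System.Collections;
using NUnit.Framework;

// Abstract base fixture: every test in here runs against any IDictionary.
public abstract class DictionaryContractTests
{
    protected IDictionary dictionary;

    // Each derived fixture supplies the specific collection under test.
    protected abstract IDictionary CreateDictionary();

    [SetUp]
    public void SetUp()
    {
        dictionary = CreateDictionary();
    }

    [Test]
    public void AddThenRetrieve()
    {
        dictionary.Add("key", "value");
        Assert.AreEqual("value", dictionary["key"]);
    }

    [Test]
    [ExpectedException(typeof(ArgumentException))]
    public void AddDuplicateKeyThrows()
    {
        dictionary.Add("key", "first");
        dictionary.Add("key", "second"); // IDictionary.Add must reject duplicate keys
    }
}

// One tiny derived fixture per collection class; it inherits all the tests.
[TestFixture]
public class SkipListTests : DictionaryContractTests
{
    protected override IDictionary CreateDictionary()
    {
        return new SkipList(); // hypothetical custom collection
    }
}

NUnit picks up the inherited [Test] methods and runs every one of them against each derived fixture, so one test class covers every collection. That's the pattern the Visual Studio framework won't let you use.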

So naturally, when I got hold of the Visual Studio 2005 betas, I migrated my collection classes over and improved them to use the new generics support. I also migrated my NUnit test cases over to the new built-in unit testing functionality. I find it strange that a tool which promotes good object-oriented design and programming practices won't let you use those same practices to unit test your code. Test class inheritance was the very first thing I tried... and it failed miserably. You simply aren't allowed to do it in the first version. I think this is a huge mistake by Microsoft.

Wednesday, September 28, 2005

Killing Readability?

There's been a lot of action recently in blogs over at Microsoft talking about C# 3.0. I won't even begin to discuss how silly this is given that C# 2.0 isn't even out of beta yet... but anyway. The coolest feature to be included is something called Linq. If you ever looked at C-Omega, then Linq ought to look very familiar. The basic idea is to let you embed SQL-style query constructs directly in your code, make data access more type safe, and have result sets be first-class objects in your program. This largely gets rid of the funky data layers that try to mediate between your database and your business objects. It's a very cool idea. Cyrus has been talking a lot about it here and here.

Linq has naturally been getting a lot of the attention. What's been getting less attention is another new feature called "Implicitly Typed Local Variables". Cyrus recently blogged about it here. The idea is to take code like this:

int i = 5;
string s = "Hello";
double d = 1.0;
int[] numbers = new int[] {1, 2, 3};
Dictionary<int,Order> orders = new Dictionary<int,Order>();

And turn it into code like this:

var i = 5;
var s = "Hello";
var d = 1.0;
var numbers = new int[] {1, 2, 3};
var orders = new Dictionary<int,Order>();

To be very clear: var is not like object! Variables declared with var are still of one specific static type. The difference is that the compiler now figures out that type for you and generates it under the hood, so the IL emitted for both snippets would be identical. In other words, it's just syntactic sugar.
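A quick illustration of the difference (my own example, not from Cyrus's post):

var s = "Hello";    // compiler infers System.String
s = 5;              // compile-time error: an int is not a string

object o = "Hello"; // object really can hold anything...
o = 5;              // ...so this compiles, but you've lost static typing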

So this makes for less work, which is a good thing, right? Here is my fear: I think it will cut down on readability. As the .NET languages progress, Microsoft keeps adding things to them that require you to lean on IntelliSense to figure out. If I see the following:

var foo = BarWhichReturnsSomething();

I have to use IntelliSense to figure out what foo really is. Call me old fashioned, but I still actually print code out on paper (I'm no environmentalist) when I review it... especially other people's code. You can't exactly hold your mouse over a piece of paper and have a tooltip pop up, can you?

I find this feature especially strange since many of the language requirements in C# were built around making code explicitly more readable. My understanding is that Anders Hejlsberg (the father of C#) was a stickler for this sort of thing. For instance, if you call a method that takes an out parameter, you also have to put out on the argument at the call site. When you override a virtual method in a derived class, you have to put override on your method. Neither of these two things is strictly needed (the compiler could infer both), but C# requires them because they make code more readable, and therefore reduce bugs (hopefully).
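A quick sketch of what I mean (my own example, with made-up names):

class Explicitness
{
    static void Main()
    {
        int result;
        // The "out" keyword is required at the call site too,
        // purely so the reader can see the argument gets written to.
        int.TryParse("42", out result);
    }
}

class BaseClass
{
    public virtual void Frob() { }
}

class DerivedClass : BaseClass
{
    // "override" is required here; leave it off and the compiler warns
    // that you're hiding the base method rather than overriding it.
    public override void Frob() { }
}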

So why the change in this paradigm? Where are you Anders?

An Engineer Who Failed His First Test

I've been out of engineering school for five years now. I think that's given me sufficient time to look back on my education as it's related to my career thus far, and to see how my education prepared me for my job. All I can say is that I was damned lucky to go where I did. I'm reminded of this after reading Confessions of an Engineering Washout on Tech Central Station today:

I am an engineering washout. I left a chemical engineering major in shame and disgust to pursue the softer pleasures of a liberal arts education. No, do not pity me, gentle reader; do not assuage your horror and dismay at my degradation by flinging a filthy quarter into my shiny tin cup. Instead, hear my story, and learn why the United States lacks engineers.

Not long ago, I showed up for my first year at Smartypants U., fresh from a high school career full of awards and honors and gold stars. My accomplishments all pointed towards a more verbal course of study, but I was determined to spend my college days learning something useful. With my strong science grades and excellent standardized test scores, I felt certain that I could handle whatever engineering challenges Smartypants U. had to offer. Remember: Kern = real good at math and science. You will have cause to forget that fact very soon.

What follows is a very interesting article about living with the consequences of failing his first engineering test. What's most interesting is that he never mentions the first test he failed. To be honest, I don't know whether he even realizes he failed it, or that he took it at all.

To what magical esteemed test am I referring? He picked the wrong school. Picking the right school is probably the most important test an aspiring engineer will ever take. From the sounds of it, our friend Doug went to a school with a stellar reputation for taking in brilliant people. What more and more people are realizing, however, is that these schools seem to think that accepting you into their little club is the most important part of college. What comes after the entrance exam is just gravy. What you learn at that college seems to matter less than the fact that you now own that coveted piece of paper with Smartypants U written across the top. Congratulations: you are now the proud owner of a piece of paper, instead of an education.

I say these things having had numerous conversations with people who either went to, or know people who went to, these types of institutions. I always laugh, because I never seemed to experience the same problems they did. I've come to realize that it's simply because I picked the right school... and a damned unique one at that. I went to a school that:

  • Doesn't tenure professors
  • Rarely hires a professor that doesn't have significant experience in industry
  • Doesn't use teaching assistants
  • Has relatively small class sizes (on the order of 30 or less... seriously)
  • Actually devotes significant parts of classes to laboratory work
I wonder how different his experience would have been had he just picked a better place to learn.

Tuesday, September 06, 2005

How the GPL Will Kill Free Software

MSNBC has an article about a proposed update to the GNU Public License:

The free software association said on Tuesday it would start adapting rules for development and use of free software by including penalties against those who patent software or use anti-piracy technology.
The license needs to be adapted to a world in which e-commerce firms like have patented 'one click ordering' which prevents software makers from freely using such a feature in their programs, said the president of the Free Software Foundation Europe, Georg Greve.

"Software patents are clearly a menace to society and innovation. We like this to be more explicit. The basic idea is that if someone patents software, he loses the right to use free software. It's like a patent retaliation clause," Greve said.

Such a clause may have a big impact, because many commercial companies have benefited from free software. The GPL is employed by tens of thousands of software projects, and companies and governments around the world use it in their software or services.

In essence, GNU is declaring war on anyone who dares to make a profit off of software by patenting software processes. I think this could backfire severely on the free software movement. Before I continue: I do think some of the patents granted to software companies are nuts. Many of these stupid patents (Microsoft's IsNot patent, Amazon's One Click Ordering patent, and Microsoft's text highlighting patent) should never have been granted, either because prior art clearly exists or because they're not novel (absence of prior art and novelty both being requirements for a patent). But their abuse doesn't make all software patents evil; it simply means the USPTO needs an overhaul so it can be more effective at its lawful purpose. The examiners obviously don't understand what already exists as prior art, or what counts as novel, in the context of software. That's understandable, since software patents are a relatively new enterprise for the USPTO.

With that said, I do think this will backfire on GNU. The reason I say this is that much of the GPL'd software out there right now is provided to the public by companies that hold software patents, or would like to. While much of the free software community sees GPL'd software as a movement for everyone to fully embrace, most companies do not. They see the GPL more as charity work, a donation back to the software community. They view it as something to give back to others. While that may sound insulting to the many people out there who subsist on GPL'd software and code, one should not look a gift horse in the mouth.

If you reject these companies because they also hold software patents, they won't give up on the idea of patenting software. Instead they'll give up on providing as much GPL'd software as they do today. That would be a loss the free software community simply cannot afford.