Freekshow

April 2, 2009

Code Fest 01

Filed under: Community,Devnology — Freek Leemhuis @ 4:00 pm

Some night that was! I was well impressed with the people who showed up; discussion and banter flowed, fuelled by a shared passion for technology and an interest in other people's approaches.

[Image: codefest1]

At this first Code Fest (all in Dutch, apologies) the theme was that old classic, John Conway's Game of Life. On the night we had demonstrations of implementations in Flex, Erlang, Haskell, C++, Alloy, Java, CosMos and the Google App Engine. It certainly had my head spinning, and with what I thought was a cracking atmosphere we couldn't have asked for more. We have invested a lot of time and energy in getting this thing off the ground, but yesterday evening alone was worth every bit of it.

Thanks all for a great evening.

On a sidenote, I was playing with this domain model to describe Devnology, and thought I’d share it here:

[Image: Devnology domain model]

 

Geek as I am, it helps me focus and explain what I think we can achieve: building a technology-agnostic community by organising events with a high level of interaction between passionate developers, and enabling interaction between academia and the software development industry (software craftsmen, if you like).

With results like yesterday's and the software testing event at TU Delft, things are shaping up. Come and join us at our next meeting!

February 20, 2009

Devnology!

Filed under: Community,Devnology — Freek Leemhuis @ 9:24 am

The cat is out of the bag! It’s been a long time in the making, but finally…. coming to a theatre near you very soon… It’s…..devnology

Our first event is planned for the 1st of April (no joke, I promise) and is now open for registration. Do come!

February 14, 2009

Software estimation

Filed under: Estimation — Freek Leemhuis @ 9:44 pm

[Image: Seattle monorail]

I have been trying to improve my estimating skills recently. I have always found estimating hard to do, but I thought I'd share the things I have picked up that can make it somewhat easier to deal with.

Breaking up is hard to do

A question you get asked a lot by managerial types is: read these documents, then tell me how much it will cost to build this thing. It doesn't have to be accurate, just give me a ballpark figure. Are we talking three hundred, five hundred, or maybe eight hundred thousand euros?

Don’t do it. Even if you have an idea (it’s probably closer to twelve hundred), don’t say it. Don’t trust your ‘expert opinion’, because it is a very unreliable tool for performing a realistic estimate of this size. You can be way off. And if you are, it will come back and bite you in the a**.

The only way to get a reliable estimate is of course to decompose the requirements into a work breakdown structure. The smaller the chunks, the better your estimate will be. It is so much easier to estimate a small piece of work, and the process of breaking up large tasks into smaller ones will give you a much better idea of what needs to be done. I usually try to break down tasks into subtasks that are no larger than a day's work.

What can possibly go wrong?

When preparing a quote we often need to align different estimates, and usually my estimates are higher than anyone else's. Still, I'm usually convinced that mine are the more accurate 🙂

One thing that I picked up from reading Steve McConnell's excellent book on Software Estimation is the Program Evaluation and Review Technique (PERT). This technique was developed to counteract the natural tendency to estimate according to best case scenarios.

It turns out that if you ask a developer for a single point estimate, and then ask him to give a best case and a worst case scenario estimate, the single point estimate is in most cases very close to or equal to the best case scenario.

The PERT technique counters this effect by asking for best case and worst case estimates, as well as a most likely case estimate. A formula is then used to compute an expected case like so:

Expected Case = [Best Case + (4 * Most Likely Case) + Worst Case] / 6

Still, people tend to be optimistic when creating 'most likely' estimates, which is why Stutzke has presented a slightly altered variation:

Expected Case = [Best Case + (3 * Most Likely Case) + (2 * Worst Case)] / 6

If you are new to estimating, you will find that using this last formula will give you higher estimates. You will also find that they are much more accurate.
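
To make the difference concrete, here is a quick sketch in C# (drop it inside a console app's Main; the task and the numbers are made up purely for illustration):

double bestCase = 2.0;    // days, if everything goes right
double mostLikely = 4.0;  // days
double worstCase = 10.0;  // days, if everything goes wrong

// Classic PERT expected case
double pert = (bestCase + 4 * mostLikely + worstCase) / 6;          // (2 + 16 + 10) / 6 = 4.67 days

// Stutzke's variation, which weighs the worst case more heavily
double stutzke = (bestCase + 3 * mostLikely + 2 * worstCase) / 6;   // (2 + 12 + 20) / 6 = 5.67 days

Console.WriteLine("PERT: {0:0.00} days, Stutzke: {1:0.00} days", pert, stutzke);

Same three inputs, but the second formula nudges the result almost a full day closer to the worst case.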

How long did you say this was gonna take you?

I didn’t, since I’m not the one that is actually going to build/test/document the system. Someone else is. So the estimate should be based on how long the task is going to take the person that is actually going to perform it.

I see a lot of estimates created by senior developers, and they can be a bit arrogant in their evaluation of the tasks. “Huh, I’ve built something similar on a number of occasions, it’s not so hard. This will take me only xx amount of time.”

Tough luck for the poor schmuck that is going to perform the task and has never done anything like it before. The estimate can not be wrong, can it, since it was done by someone with greater experience! So why is it taking so much time to implement this feature?

Keep it mean & lean, okay?

Finally, commercial considerations will often drive a sales manager, keen to make a sale, to give a few pointers as to what type of estimate he needs. “This client has money to burn, so don’t be shy”. Or, more likely, “We need to make this sale so keep it lean & mean, okay?”

Don’t let any of this influence your estimate in any way. Your job is to create an estimate of the size of the work involved. An estimator needs to provide an estimate that is as accurate as possible; the manager can then decide, based on that accurate estimate, what to do about pricing or other considerations to make his pitch.

January 31, 2009

Learning

Filed under: Learning — Freek Leemhuis @ 9:06 pm

As a consultant, it's sometimes difficult to find a new assignment that is a good match for your skills. We'd prefer to learn on the job, but the person doing the hiring wants someone who already knows all the stuff that is required to do a good job. I think it was Noel Tichy who developed the model of concentric learning zones that can be used to illustrate the point.
[Image: comfort, learning and panic zones]
If you want to learn, you need to get out of your comfort zone, but just enough to stay in the learning zone. If you cross over to the panic zone you'll be thrashing rather than learning. So if I'm looking for a new assignment, I want to use skills that I do not yet fully master. Most people looking to hire a consultant, however, seem hell-bent on finding someone who has demonstrated mastery of the required skills, preferably for a number of years. They want to make sure the person they hire can do the job, and previous experience is what they perceive as the ultimate proof that the person is a good match. If the person can perform the skills required within their comfort zone, there's no risk of them straying into the panic zone.

Requiring X number of years of experience with a language, platform or framework is not effective. Experience does not equate to skill level, and as David says, as long as applicants have six months to a year of experience, consider it a moot point for comparison.

There are some strange job postings I've browsed through in the last month or so.

4 years of experience with C# 3.0? Who are they hoping to attract, Anders bloody Hejlsberg?

And how about this one: looking for unit testers. What are they? People who only write unit tests all day and are not allowed to actually write code?

Some of these job postings contain clues as to how ignorant the company is on the matter. Someone really needs to tell them that what they are looking for is a good software engineer, and that rather than matching precise experience they would be better off looking at other attributes that point to good candidates.

I like Jeff Atwood's approach to finding the right candidate:

Have the candidate give a 20 minute presentation to your team on their area of expertise. I think this is a far better indicator of success than a traditional interview, because you’ll quickly ascertain..

  • Is this person passionate about what they are doing?
  • Can they communicate effectively to a small group?
  • Do they have a good handle on their area of expertise?
  • Would your team enjoy working with this person?

Please note that solicitation in response to this blog post might not be appreciated.

December 10, 2008

The comment of all comments

Filed under: Programming — Freek Leemhuis @ 9:07 pm
These days I'm doing code reviews for other people's projects. Mostly these are architectural reviews, where we review an existing code base and comment on its design, and sometimes also on software engineering practices (or the lack thereof!).
One of the things that always comes up in these discussions is the comments in the code. Tools such as Campwood Software's SourceMonitor, for example, allow you to quickly produce a Kiviat metrics graph, such as the one displayed below.
[Image: Kiviat metrics graph produced by SourceMonitor]

There are two different metric values for comments displayed: Documentation comments and actual comments.

Documentation comments

In the graph, the value displayed for documentation lines (% Docs) is the number of documentation lines (indicated by ‘///’ at the start of the line) as a percentage of the total number of lines.
These documentation comments show up in IntelliSense. For example, I have added documentation lines to the code below:

/// <summary>
/// Get the list of most popular jokes
/// </summary>
/// <param name="numberOfJokes"></param>
/// <returns>Generic List of Joke</returns>
public List<Joke> GetMostPopularJokes(int numberOfJokes)
{
   List<Joke> result = new List<Joke>();
   Joke jokeStub = new Joke();
   jokeStub.question = "How many developers does it take to change a roll of toilet paper?";
   jokeStub.punchline = "Who knows? It's never happened.";
   result.Add(jokeStub);
   return result;
}

Then if I call the method, the information in the documentation lines is displayed in the IntelliSense tooltip thingy:
[Image: IntelliSense tooltip showing the documentation comments]

In addition, tools like Sandcastle allow you to compile the comments into help files. Pretty neat?
If you’re Microsoft, and you actually have to deliver this kind of documentation, I suppose it is. In most other cases, I think this type of help documentation is pretty useless. It is out of date as soon as the code is updated, and I have seen on many occasions that the code gets updated but the documentation comments do not. And if that happens, will you ever trust the ‘documentation’ again?
There is no automatic link between comments and the code, and only the most disciplined team of programmers will never forget to adjust comments.
On the other hand, if you use concise variable and function names, should they not express the intent? What does the documentation comment really add?
Usually it is just a (partial) rephrasing of the actual code, and it violates the DRY principle. Have you ever done a lot of refactoring while keeping the documentation lines up to date at the same time? If you have, I bet you’ll agree with me: this type of ‘documentation’ – let’s do without it.

Actual comments

I find it hard to attach any meaning to code metrics that express the number of in-line comments. It all depends on the purpose of the comments. The following example is from the Enterprise Library:

// convert to base 64 string if data type is byte array
if (contextData.GetType() == typeof(byte[]))
{
    value = Convert.ToBase64String((byte[])contextData);
}
else
{
    value = contextData.ToString();
}

What does the line of comment add? Does it tell you anything that the code does not? Indeed not. Zero points. This kind of gratuitous comment is all too common, and if you have many of them, will you keep reading them? Or will you start to blank out the comments in your mind, thereby possibly missing the important comment that actually does explain a certain coding decision?
Please, only comment if you have something to add!
Would you not agree that having a very low count of comment lines is actually a good metric for quality? That it can be indicative of code that expresses intent very well, with a logical design?
Good code only rarely needs comments.

Martin Fowler, in his book Refactoring, indicates that comments can actually be a smell:

When you feel the need to write a comment, first try to refactor the code so that any comment becomes superfluous.
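
To apply that to the Enterprise Library fragment above: extract the branch into a method whose name states the intent, and the comment has nothing left to add. A sketch, not the actual library code:

value = ConvertContextDataToString(contextData);

private static string ConvertContextDataToString(object contextData)
{
    // the method name now says what the comment used to say
    byte[] bytes = contextData as byte[];
    return bytes != null ? Convert.ToBase64String(bytes) : contextData.ToString();
}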

What are your comments?


November 4, 2008

Free your MIND – 2008

Filed under: Events,Microsoft — Freek Leemhuis @ 9:05 am

At the company we’re busy organising our annual MIND event, our internal event that we organize together with Microsoft Netherlands to keep our Microsoft community up to speed on the latest developments in Microsoft technologies. If you are interested in attending, drop me a line. Check it out here (in Dutch).

October 2, 2008

PostSharp: AOP for .Net

Filed under: .Net,AOP,Open Source — Freek Leemhuis @ 10:10 am

The updates to Microsoft’s reference architecture (find them on www.codeplex.com/AppArch) got me thinking about what a good reference implementation would be when adopting Domain Driven Design.

For now let’s focus on cross-cutting concerns, as they are depicted in the reference architecture.

 

The canonical example of a cross-cutting concern is logging, and I’ve come across quite a few applications that had logging code scattered across the entire application. A better way to separate these concerns is by using Aspect Oriented Programming (AOP). In the .Net world there are only a few frameworks that deal with AOP, and of these PostSharp is probably the best known. The Enterprise Library of course has the Policy Injection Application Block, which offers similar functionality.

I have been spending some time with PostSharp, and I really like the improvement it can bring to an application’s design. And it’s really not hard to do. I’ll give a quick example:

After you’ve installed the PostSharp bits you’ll need to include two references, to PostSharp.Laos and PostSharp.Public.

First, you will need to create an aspect class; let’s name it TraceAspect.

[Serializable]
public sealed class TraceAspect : OnMethodBoundaryAspect       
{
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
       Console.WriteLine("User {0} entering method {1} at {2}" , Environment.UserName, eventArgs.Method , DateTime.Now.ToShortTimeString());           
    }
    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
       Console.WriteLine("User {0} exiting method {1} at {2}", Environment.UserName, eventArgs.Method, DateTime.Now.ToShortTimeString());
    }
}

A few things to note here:

  • You will need to mark the class with the Serializable attribute.
  • The OnMethodBoundaryAspect class actually extends the Attribute class, so it’s like you’re creating a custom attribute.
  • We’re implementing the aspect handlers by overriding the designated base class handlers, in this case OnEntry and OnExit.

We’re now ready to apply the PostSharp attribute to our methods to add tracing:

class Program
{
    static void Main(string[] args)
    {
        Customer cust = GetCustomer(1);
        SaveCustomer(cust);
        Console.ReadKey();
    }

    [TraceAspect]
    public static Customer GetCustomer(int customerId)
    {
        Console.WriteLine("GetCustomer is executing");
        System.Threading.Thread.Sleep(2000); // taking its time...
        return new Customer { FirstName = "Private", LastName = "Ryan" };
    }

    [TraceAspect]
    public static void SaveCustomer(Customer customer)
    {
        Console.WriteLine("SaveCustomer is executing");
        System.Threading.Thread.Sleep(3000); // taking its time... again...
    }

    public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

As you can see, there’s not much to it beyond adding the [TraceAspect] attribute to our methods. When executing this little sample you’ll get the expected output:
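
It will look roughly like this (your user name and times will obviously differ, and the exact method formatting depends on how eventArgs.Method is rendered):

User freek entering method Customer GetCustomer(Int32) at 14:32
GetCustomer is executing
User freek exiting method Customer GetCustomer(Int32) at 14:32
User freek entering method Void SaveCustomer(Customer) at 14:32
SaveCustomer is executing
User freek exiting method Void SaveCustomer(Customer) at 14:32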

The magic is performed for us by PostSharp’s post-compiler, which weaves the aspect code into the IL.

So when you use Reflector (I almost added Lutz’s name here…) to look at your final assembly, you can see the weaving that has been done.

PostSharp is a very powerful tool and it’s capable of a lot more than what I can show here. It’s a bit of a one-man band behind it, but its creator Gael Fraiteur has done a fine job of releasing it as open source, and it’s actually got good documentation. Check it out on www.postsharp.org.

 

September 19, 2008

Surface surfaces at NBC

Filed under: Microsoft,Microsoft Surface — Freek Leemhuis @ 2:25 pm

Microsoft Surface is really cool. It’s a multi-touch operated coffee table. Well, that’s not doing it justice. Check out this video where Tim Huckaby demonstrates some of the work Interknowlogy have been doing in health care using Surface.

 


 

 

Now in the run-up to the election NBC has found a use for Surface in mapping out the different states and how they fall to either Obama or McCain.

And, to top it off, this sarcastic little video is just the thing for a Friday afternoon.

September 3, 2008

Unit testing part 3: Design for testability

Filed under: .Net,Microsoft,Unit Testing — Freek Leemhuis @ 11:20 am

This is part 3 in a series of posts about unit testing using Visual Studio. Part 1 and part 2 focused mainly on the MS Test environment within Visual Studio.

Writing unit tests, whether one subscribes to Test Driven Development practices or not, often forces one to consider the design of the code under test. This is probably where the term ‘design for testability’ comes from: not all code is easily testable, and you should therefore design your code so it can easily be tested. I’ve always found the term strange, as the ultimate purpose of code is not that it is tested, but that it works and is maintainable. If you can manage loose coupling through OO principles such as the Single Responsibility Principle and the Open Closed Principle, then the code is testable as a result of good design.

Or, as Uncle Bob says:

“The act of writing a unit test is more an act of design than of verification”

Having said that, what is generally meant by design for testability?

Consider the following class:

public class Clown
{
    public void TellAJoke()
    {
        Joke joke = new JokeRepository().GetRandomJoke();
        Console.WriteLine(joke.question);
        Console.Read();
        Console.WriteLine(joke.punchline);
        DrumRoll.Play();
    }
}

Just a few lines of code, and we’re already in a pickle. The code is not easily testable, and the problem is tight coupling. You cannot test the TellAJoke method without also invoking the JokeRepository, Joke and DrumRoll classes.

The GoF advise us to program to an interface, not an implementation.

(It’s actually in the first chapter, so you might catch it just before you fall asleep :-))

Let’s take this advice to heart and do some refactoring:

public class Clown
{
    ISoundEffect _soundeffect;
    IDisplay _display;
    public Clown(IDisplay display, ISoundEffect soundeffect)
    {
        _soundeffect = soundeffect;
        _display = display;
    }
    public void TellAJoke(IJoke joke)
    {
        _display.Show(joke.question);
        _display.Show(joke.punchline);
        _soundeffect.Play();
    }
}

We have defined interfaces for Joke, Display and SoundEffect. We can therefore call on the might of polymorphism if we want to use alternative methods of displaying text or playing sound effects (we may want to switch to trumpets in a future iteration!). In this refactoring we are using dependency injection to remove the coupling with DrumRoll (ISoundEffect) and Console (IDisplay) by passing them in as constructor arguments.

Note that we have also defined an interface for Joke, and added the joke as an argument to be passed into the method. Because our JokeRepository retrieves jokes from a database backend, we can not really use the repository in testing: we want to keep our tests lean and mean, and that means no database trips!
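
For reference, the interfaces and the console-based implementations used in the test below could look something like this — they are not shown above, so treat the exact shapes as my assumption:

// a Joke class implementing IJoke, with settable question/punchline properties, is assumed to exist
public interface IJoke
{
    string question { get; }
    string punchline { get; }
}

public interface IDisplay
{
    void Show(string text);
}

public interface ISoundEffect
{
    void Play();
}

public class ConsoleDisplay : IDisplay
{
    public void Show(string text)
    {
        Console.WriteLine(text);
    }
}

public class DrumRoll : ISoundEffect
{
    public void Play()
    {
        Console.WriteLine("Badum tss!");
    }
}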

For the purpose of testing the TellAJoke method, we can now substitute the IJoke parameter with our own stub:

[TestMethod]
public void Clown_TellAJoke_Test()
{
    Clown pipo = new Clown(new ConsoleDisplay(), new DrumRoll());
    //create Joke stub
    Joke jokeStub = new Joke
    {
        question = "How many developers does it take to change a lightbulb?",
        punchline = "They won't. It's a hardware problem"
    };
    //redirect console output
    var  writer = new StringWriter();
    Console.SetOut(writer);
    //call the method under test
    pipo.TellAJoke(jokeStub);
   
    string output = writer.ToString();
    Assert.IsTrue(output.Contains(jokeStub.question));
    Assert.IsTrue(output.Contains(jokeStub.punchline));
   
}

Note that we are still using the console as the method to display text, but if we decide to change that, all we would need to do is implement the IDisplay interface on an alternative implementation. The key here is that we don’t have to change our Clown class to change the implementation of display or soundeffect.
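
One such alternative implementation also makes a handy test double: a display that simply records what it is asked to show, so the test no longer needs to redirect Console output. A sketch (RecordingDisplay is my own name, not something from the example above; it needs using System.Collections.Generic):

public class RecordingDisplay : IDisplay
{
    public List<string> Lines { get; private set; }

    public RecordingDisplay()
    {
        Lines = new List<string>();
    }

    public void Show(string text)
    {
        Lines.Add(text); // record instead of writing to the console
    }
}

The test can then assert directly against the Lines collection instead of parsing redirected console output.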

Are you mocking to me?

So what about mocking? What’s that all about? The seminal reference in this regard is Martin Fowler’s article Mocks aren’t stubs, where he explains the difference between the two. Basically, if you are using stubs you’re usually testing state, whereas mocks allow you to test the interaction between objects.

Where in the previous example we used a stub for the Joke class, we have left out tests for the sound effect. What we would like to test is that the sound effect plays when the punchline is delivered. Let’s create a stub to substitute the sound effect – we don’t want actual sound effects when running our unit tests, fun as that might be…

 

public class SoundEffectStub : ISoundEffect
{
    public void Play()
    {
        Console.WriteLine("Sound!");
    }
}

And we could test that the ‘sound’ gets played by redirecting the console output as before. However, we are testing state, and state that is introduced by our stub. An alternative here is to use a mocking framework that will allow us to test the interaction of these objects. The only thing we want to test is that the soundeffect gets played, and it’s for this kind of behavior verification that mocking frameworks really shine.

Typemock is a popular mocking framework in the .Net space, and we can use it for our little example like so:

[TestMethod]
public void Clown_TellAJoke_PlaysSound_Test()
{
    MockManager.Init();
    Mock mockSound = MockManager.Mock(typeof(SoundEffectDrumRoll));
    Clown pipo = new Clown(new ConsoleDisplay(),new SoundEffectDrumRoll());

    //create Joke stub
    Joke jokeStub = new Joke
    {
        question = "How can you tell when a programmer has had sex?",
        punchline = "When he's washing the pepper spray out of his eyes."
    };

    //redirect console output
    var  writer = new StringWriter();
    Console.SetOut(writer);

    //set expectations
    mockSound.ExpectCall("Play");

    //call the method under test
    pipo.TellAJoke(jokeStub);
   
    string output = writer.ToString();
    Assert.IsTrue(output.Contains(jokeStub.question));
    Assert.IsTrue(output.IndexOf(jokeStub.punchline) > output.IndexOf(jokeStub.question));

    mockSound.Verify();

}

Typemock is different from other mocking frameworks in that it uses the .Net profiler API to monitor and intercept program execution. This makes it very powerful, because it is not tied to the same restrictions as other mocking frameworks: Rhino Mocks and similar solutions require that you code to interfaces, or that any method you want to mock is marked as virtual. Some people have even voiced concerns that Typemock is too powerful, in that it does not force you to design for testability. I think Roy has put that argument to bed with his reply.
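
For comparison: because the refactored Clown already takes an ISoundEffect through its constructor, an interface-based framework can verify the same interaction. A rough sketch using Rhino Mocks (requires using Rhino.Mocks; treat the exact calls as an approximation rather than a recipe):

[TestMethod]
public void Clown_TellAJoke_PlaysSound_RhinoTest()
{
    // Arrange: mock only the sound effect, use the real console display
    ISoundEffect soundMock = MockRepository.GenerateMock<ISoundEffect>();
    Clown pipo = new Clown(new ConsoleDisplay(), soundMock);
    Joke jokeStub = new Joke
    {
        question = "Why do programmers confuse Halloween and Christmas?",
        punchline = "Because Oct 31 == Dec 25."
    };

    // Act
    pipo.TellAJoke(jokeStub);

    // Assert: the interaction we care about is that Play was called
    soundMock.AssertWasCalled(s => s.Play());
}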

In the Typemock example above we can use the original SoundEffectDrumRoll implementation: by setting up the expectation on the Play method, Typemock makes sure the method does not actually get executed, and by calling Verify we make sure the method was in fact called.

Especially when working with legacy code you will find that Typemock offers powerful features that allow you to test code that would otherwise not be testable without refactoring. On the other hand, if you write new code and use Typemock, you can ignore some of the design issues we’ve talked about here. But that does not mean it’s the right thing to do.


August 18, 2008

JP’s contest

Filed under: .Net,Reading — Freek Leemhuis @ 9:47 am

Jean-Paul Boodhoo is a well-known authority on agile methodologies and patterns in the .Net community. He is the author of many published articles, and I have tremendously enjoyed the series of webcasts he produced for DNRtv on Demystifying Design Patterns.

JP also organizes the Nothing but .Net training bonanzas, and reading the descriptions of these bootcamp-style events has always had me thinking of ways to sign up for one of them. So when he opened a competition for submitting stories that would foster the passion in developers, I put in a few words. And lo and behold, I made it into the top 5! So now you can vote for me or any of the other contestants to give your favorite author the chance to enjoy a great week of training.

UPDATE:

So I did not win the top prize, which means I need to persuade my company to stump up the money for the training course! I did come in third, which JP is generous enough to award with a 130 dollar Amazon voucher. I’m planning to spend it on this set of books:

  • Feathers’ book has had some rave reviews, and I’m curious what tips and practices it offers, especially since it seems I will be working on a lot of legacy code in the near future…
  • This book has apparently set a new standard in the publishing of programming books. It’s got syntax coloring! And it’s a very good book on WPF, which I want to dive into a bit more.
  • McConnell is my favorite author, and this is one of his books I have yet to get my hands on.
  • Uncle Bob is another one of my favorites, and this brand new tome will no doubt contain many a pearl of wisdom. Despite the ugly cover.

Thanks to JP for stacking up my reading list!

