November 4, 2008

Free your MIND – 2008

Filed under: Events,Microsoft — Freek Leemhuis @ 9:05 am

At the company we’re busy organising our annual MIND event, the internal conference we put together with Microsoft Netherlands to keep our Microsoft community up to speed on the latest developments in Microsoft technologies. If you are interested in attending, drop me a line. Check it out here (in Dutch).


September 19, 2008

Surface surfaces at NBC

Filed under: Microsoft,Microsoft Surface — Freek Leemhuis @ 2:25 pm

Microsoft Surface is really cool. It’s a multi-touch operated coffee table. Well, that’s not doing it justice. Check out this video in which Tim Huckaby demonstrates some of the work InterKnowlogy has been doing in health care using Surface.




Now, in the run-up to the election, NBC has found a use for Surface: mapping out the different states and how they fall to either Obama or McCain.

And, to top it off, this sarcastic little video is just the thing for a Friday afternoon.

September 3, 2008

Unit testing part 3: Design for testability

Filed under: .Net,Microsoft,Unit Testing — Freek Leemhuis @ 11:20 am

This is part 3 in a series of posts about unit testing using Visual Studio. Part 1 and part 2 focused mainly on the MS Test environment within Visual Studio.

Writing unit tests, whether one subscribes to Test Driven Development practices or not, often forces one to consider the design of the code under test. This is probably where the term ‘design for testability’ comes from: not all code is easily testable, so you should design your code so that it can easily be tested. I’ve always found the term strange, as the ultimate purpose of code is not to be tested, but to work and to be maintainable. If you can achieve loose coupling through OO principles such as the Single Responsibility Principle and the Open/Closed Principle, then the code is testable as a result of good design.

Or, as Uncle Bob says:

“The act of writing a unit test is more an act of design than of verification”

Having said that, what is generally meant by design for testability?

Consider the following class:

public class Clown
{
    public void TellAJoke()
    {
        Joke joke = new JokeRepository().GetRandomJoke();
        Console.WriteLine(joke.question);
        new DrumRoll().Play();
        Console.WriteLine(joke.punchline);
    }
}

Just a few lines of code, and we’re already in a pickle. The code is not easily testable, and the problem is tight coupling: you cannot test the TellAJoke method without also invoking the JokeRepository, Joke and DrumRoll classes.

The GoF: “Program to an interface, not an implementation”

(It’s actually in the first chapter, so you might catch it just before you fall asleep :-))

Let’s take this advice to heart and do some refactoring:

public class Clown
{
    ISoundEffect _soundeffect;
    IDisplay _display;

    public Clown(IDisplay display, ISoundEffect soundeffect)
    {
        _soundeffect = soundeffect;
        _display = display;
    }

    public void TellAJoke(IJoke joke)
    {
        _display.Write(joke.question);
        _soundeffect.Play();
        _display.Write(joke.punchline);
    }
}

We have defined interfaces for Joke, Display and SoundEffect. We can therefore call on the might of polymorphism if we want to use alternative methods of displaying text or producing sound effects (we may want to switch to trumpets in a future iteration!). In this refactoring we are using dependency injection to remove the coupling with DrumRoll (ISoundEffect) and Console (IDisplay) by passing them as arguments to the constructor.
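For reference, the three interfaces could look something like this (the member names here are assumptions, based on how the example uses them):

```csharp
public interface IDisplay
{
    void Write(string text);
}

public interface ISoundEffect
{
    void Play();
}

public interface IJoke
{
    string question { get; }
    string punchline { get; }
}
```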

Note that we have also defined an interface for Joke, and pass the joke as an argument into the method. Because our JokeRepository retrieves jokes from a database backend, we cannot really use the repository in testing: we want to keep our tests lean and mean, and that means no database trips!

For the purpose of testing the TellAJoke method, we can now substitute the IJoke parameter with our own stub:

[TestMethod]
public void Clown_TellAJoke_Test()
{
    Clown pipo = new Clown(new ConsoleDisplay(), new DrumRoll());
    //create Joke stub
    Joke jokeStub = new Joke
    {
        question = "How many developers does it take to change a lightbulb?",
        punchline = "They won't. It's a hardware problem"
    };
    //redirect console output
    var writer = new StringWriter();
    Console.SetOut(writer);
    //call the method under test
    pipo.TellAJoke(jokeStub);
    string output = writer.ToString();
    Assert.IsTrue(output.Contains(jokeStub.punchline));
}

Note that we are still using the console to display text, but if we decide to change that, all we need to do is implement the IDisplay interface on an alternative implementation. The key here is that we don’t have to change our Clown class to change the implementation of display or sound effect.
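To illustrate how cheap such a swap is, here is a sketch of a hypothetical alternative display (FileDisplay is made up for this example; the IDisplay interface is repeated so the snippet stands on its own):

```csharp
using System;
using System.IO;

// IDisplay as used by the Clown class in this post
public interface IDisplay
{
    void Write(string text);
}

// A hypothetical alternative: append the output to a log file
// instead of writing it to the console.
public class FileDisplay : IDisplay
{
    private readonly string _path;

    public FileDisplay(string path)
    {
        _path = path;
    }

    public void Write(string text)
    {
        File.AppendAllText(_path, text + Environment.NewLine);
    }
}
```

The Clown class stays untouched: `new Clown(new FileDisplay("jokes.log"), new DrumRoll())` is the only change needed at the composition site.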

Are you mocking to me?

So what about mocking? What’s that all about? The seminal reference in this regard is Martin Fowler’s article Mocks aren’t stubs, where he explains the difference between the two. Basically, if you are using stubs you’re usually testing state, whereas mocks allow you to test the interaction between objects.

In the previous example we used a stub for the Joke class, but we left the sound effect untested. What we would like to verify is that the sound effect plays when the punchline is delivered. Let’s create a stub to substitute the sound effect – we don’t want actual sound effects when running our unit tests, fun as that might be…


public class SoundEffectStub : ISoundEffect
{
    public void Play()
    {
        // write to the console so the test can detect the 'sound'
        Console.Write("Badum tish!");
    }
}

And we could test that the ‘sound’ gets played by redirecting the console output as before. However, we would then be testing state, and state that was introduced by our stub at that. An alternative is to use a mocking framework that allows us to test the interaction between these objects. The only thing we want to test is that the sound effect gets played, and it’s for this kind of behavior verification that mocking frameworks really shine.

Typemock is a popular mocking framework in the .Net space, and we can use it for our little example like so:

[TestMethod]
public void Clown_TellAJoke_Test()
{
    Mock mockSound = MockManager.Mock(typeof(SoundEffectDrumRoll));
    Clown pipo = new Clown(new ConsoleDisplay(), new SoundEffectDrumRoll());

    //create Joke stub
    Joke jokeStub = new Joke
    {
        question = "How can you tell when a programmer has had sex?",
        punchline = "When he's washing the pepper spray out of his eyes."
    };

    //redirect console output
    var writer = new StringWriter();
    Console.SetOut(writer);

    //set expectations
    mockSound.ExpectCall("Play");

    //call the method under test
    pipo.TellAJoke(jokeStub);
    string output = writer.ToString();
    Assert.IsTrue(output.IndexOf(jokeStub.punchline) > output.IndexOf(jokeStub.question));

    //verify that Play was in fact called
    MockManager.Verify();
}



Typemock is different from other mocking frameworks in that it uses the .Net profiler API to monitor and intercept program execution. This makes it very powerful, because it is not tied to the same restrictions as other mocking frameworks. Rhino Mocks and similar solutions require that you code to interfaces, or that any method you want to mock is marked as virtual. Some people have even voiced concerns that Typemock is too powerful, in that it does not force you to design for testability. I think Roy has put that argument to bed with his reply.

In the above example we can use the original SoundEffectDrumRoll implementation. By setting up the expectation on the Play method, Typemock makes sure the method does not actually get executed, and by using the Verify method we make sure the method was in fact called.

Especially when working with legacy code, you will find that Typemock offers powerful features that allow you to test code that would otherwise not be testable without refactoring. On the other hand, if you write new code and use Typemock, you can ignore some of the design issues we’ve talked about here – but that does not mean it’s the right thing to do.

August 13, 2008

Devdays 2008 videos

Filed under: .Net,Events,Microsoft — Freek Leemhuis @ 8:20 pm

Just a quick note here to point those of you who are interested to the online videos of the Devdays 2008.

June 26, 2008

Unit testing with Visual Studio 2008 – Part 2

Filed under: Microsoft,Unit Testing — Freek Leemhuis @ 3:03 pm

This is the second part of a series of blog posts on unit testing. Find the first part here.

In the first part we introduced the MS Test framework. I realise now that I left out a few bits and pieces, so first off let me try to make amends.

Initialize and Cleanup

Many times when running a test you’ll want to set up some conditions under which the test will run, and after completion you’ll want to clean up these artifacts. Similarly to NUnit, where you’ll find attributes such as [SetUp] and [TearDown], MS Test has some attributes you can use to accomplish this.

To set up state before every test method in a test class, you can decorate a method with the [TestInitialize] attribute. Similarly, to clean up you can use the [TestCleanup] attribute.

For example, suppose you are testing some data access routines that all use the same data context. Rather than creating it in every test method, you can set it up in a method decorated with the [TestInitialize] attribute:

private MyDataContext db;

[TestInitialize]
public void TestInit()
{
    db = new MyDataContext();
}

Similarly, you can use the [TestCleanup()] attribute to dispose of any state.

[TestCleanup]
public void WrapUp()
{
    db.Dispose();
    db = null;
}

Note: you only use a db connection setup like this when testing data access routines. For any other routines you do not really want the tests to hit the database.

So that’s pretty convenient: you don’t have to add the initialization code in the methods themselves. However, the opening and closing of the database will occur as many times as there are test methods in your test class.

You can use the [ClassInitialize] and [ClassCleanup] attributes for state changes that execute only once for all test methods within a test class. However, methods using these attributes must be static, which means you cannot use them to set properties on the instance of the test class.
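A minimal sketch of how those class-level attributes are wired up (the class and field names are made up for the example; note that MS Test requires the [ClassInitialize] method to accept a TestContext parameter):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerTests
{
    // shared by all test methods in this class, hence static
    private static MyDataContext sharedDb;

    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        // runs once, before the first test method in this class
        sharedDb = new MyDataContext();
    }

    [ClassCleanup]
    public static void ClassWrapUp()
    {
        // runs once, after the last test method has finished
        sharedDb.Dispose();
    }
}
```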

Testing for exceptions

A lot of times I’ve seen developers use a try/catch block to test for exceptions. Joel, for example, has posted this little code snippet:

[TestMethod]
public void TestZipCodeNotNumericThrowsArgumentException()
{
    TransactionRequestInfo req = new TransactionRequestInfo();
    string testValue = "034JB";
    try
    {
        req.Zip = testValue;
        Assert.Fail("Exception not thrown.");
    }
    catch (ArgumentException aex)
    {
        Assert.IsTrue(true, "Exception Thrown Properly");
    }
}

You can see what he’s trying to do, and I’ve seen many a test use this approach. There is a better way: let’s rewrite this using the appropriate ExpectedException attribute:

[TestMethod]
[ExpectedException(typeof(ArgumentException))]
public void TestZipCodeNotNumericThrowsArgumentException()
{
    TransactionRequestInfo req = new TransactionRequestInfo();
    string testValue = "034JB";
    req.Zip = testValue;
}

You can see it pays to investigate the full capabilities of the testing framework. Note that one would probably want to use a more specific exception than ArgumentException (InvalidZipcodeException?), but that’s for another post and another day.  

On cleanup and debugging

When dealing with databases, you might on occasion use a method with the [TestInitialize] attribute to verify preconditions in the database, and similarly have a cleanup routine to remove any artifacts. What happened to me on more than one occasion when starting with unit tests is that I would run tests under the debugger, and when an exception occurred I would stop the debugging session. This, however, prevents execution of the cleanup routine, so the state is not properly reset before the following test run. Be sure to continue debugging to allow your cleanup code to execute!

Keyboard Shortcuts

To wrap up the specifics of the MS Test framework, here’s a list of keyboard shortcuts for running your tests. Navigating the test windows is a bit cumbersome, so these keyboard combinations will come in handy:

CTRL + R, then press T
This runs the test(s) in the current scope: the current test method, all the tests in the current test class, or all the tests in the namespace, depending on where the cursor is.

CTRL + R, then press CTRL + T
This also runs the test(s) in the current scope, but under debugging.

CTRL + R, then press C
This runs all the tests in the current test class.

CTRL + R, then press A
This runs all the tests in the solution.


MSDN documentation on Unit Testing

Write Maintainable Unit Tests That Will Save You Time And Tears (MSDN article by Roy Osherove)

If you’re looking for a good book on unit testing, I can recommend Pragmatic Unit Testing in C# with NUnit, by Andrew Hunt and David Thomas. The authors use NUnit as the test framework, but the methods and practices apply regardless of which framework you use.

Roy Osherove shares his thoughts on unit testing and his upcoming book, The Art of Unit Testing.

June 2, 2008

DevDays 2008 impressions

Filed under: .Net,ADO.Net Data Services,Entity Framework,Events,Microsoft — Freek Leemhuis @ 1:31 pm


The keynote this year was titled ‘Why Software Sucks’, by .Net professor David Platt. I missed most of it while lining up to get tickets (thanks Mark ;-)), but it was basically the same session Platt has been delivering for a number of years now, most recently at TechEd Barcelona in 2007, and I was a bit surprised to find this talk promoted to keynote for the DevDays. Must have been a slow day at the office for original content or new announcements…
If you’ve not seen Platt’s talk before, it’s pretty entertaining. You can watch it (from a similar session) online here.

Silverlight 2.0

The session from Daniel Moth was about Silverlight 2.0. Previous versions of Silverlight were all about media and video delivery, and you could only program against them in JavaScript. With version 2.0 you can finally write managed code to run in the browser. This, combined with the power of XAML, makes for a very compelling platform for delivering RIAs (most self-respecting conferences these days include a RIA (Rich Internet Application) track). Silverlight 2 was of course announced during Mix, so if you want to check it out, go and watch the Silverlight sessions online. They’ve recently redone the sessions so that the streaming includes the presenter as well as a separate stream showing the slides and demos.

The ADO.Net Entity Framework

The ADO.Net Entity Framework session from Mike Taulty was a good introduction to the subject. Mike pointed out a new website where you can find news, tutorials and other resources on new data technologies such as the Entity Framework and ADO.Net Data Services. I was a bit puzzled that Mike spent a considerable part of his session on how you can still use the old-fashioned ADO API (DataReader, Command) to program against the EF. I can think of only a small number of cases where you’d want to do that.
Check out this webcast for more details on the EF.

WCF on the Web

For me, the most interesting session of the day was delivered by Peter Himschoot, who showed the additional work that has been done in WCF for the web in version 3.5. More specifically, WCF now supports JSON and REST. It’s interesting to see that a framework like WCF was designed at a high enough level of abstraction that, while it was built when services were all very much SOAP-oriented, it has now been extended to include new concepts like JSON and REST.
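To give an impression of what this looks like in .Net 3.5: the WebGet attribute from System.ServiceModel.Web maps a service operation onto a URI and lets you serialize the response as JSON instead of SOAP. The service contract below is made up for illustration (IJokeService and the Joke data contract are not from the session):

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

// A hypothetical REST-style contract
[ServiceContract]
public interface IJokeService
{
    // GET requests to jokes/{id} are routed here;
    // the Joke data contract is serialized as JSON
    [OperationContract]
    [WebGet(UriTemplate = "jokes/{id}", ResponseFormat = WebMessageFormat.Json)]
    Joke GetJoke(string id);
}
```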

ASP.Net MVC Framework

On the Friday, Alex Thissen kicked off with an introduction to the MVC framework. The MVC framework will be an alternative to the current WebForms model. It allows the programmer to control the HTML markup, rather than having it generated by user controls. It does away with postback and viewstate, so you get a much cleaner model that allows for better separation of concerns and better testability. As always, Alex was very thorough, and I was impressed to see he managed to sprinkle Unity and Moq into his demo without losing the audience.


Next up was Anko Duizer, who discussed various options for including LINQ to SQL in your architecture. Do you regard LINQ to SQL as your data layer, or do you just use it as part of your data layer? This was a good follow-up to Anko’s previous introductory sessions on LINQ to SQL, and it addressed some of the difficulties you can run into when figuring out best practices for leveraging LINQ and LINQ to SQL.

ADO.Net Data Services

Mike Taulty then had another session, this time on ADO.Net Data Services (codename Astoria). Using this technology you can take a data model like LINQ to SQL or an Entity Framework model and make the classes available through REST-based services. The framework will be made available in Service Pack 1 for Visual Studio 2008, currently in beta.
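As a sketch of how little code this takes (the Northwind names are just an example, not from the session): you expose an existing model by deriving from DataService&lt;T&gt; and opening up access rules.

```csharp
using System.Data.Services;

// Expose a hypothetical NorthwindEntities model as a REST service
public class NorthwindService : DataService<NorthwindEntities>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // grant read-only access to all entity sets
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }
}
```

The entities then become addressable by URI, along the lines of /NorthwindService.svc/Customers('ALFKI')/Orders.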


Marcel de Vries showed some of the new features of Rosario, the upcoming version of Visual Studio and the .Net framework. His talk focused mostly on Team System. The primary goal of the new Rosario features is to bring together the three main stakeholders of a software project: business and IT governance, IT operations, and development. Some of the new features include:

Historical debugging: a new test runner application allows a tester to record test runs, which can then be replayed on a programmer’s machine, reproducing the bugs the tester stumbled upon and creating a debug session where there previously was none!
This should get rid of some ‘but it works on my machine’ discussions…

Functional testing (codename Camano): a test manager for running functional tests. It provides test execution assistance, workflow, rich bug logging and reporting.

Marcel also showed some features of the Team Architect edition, which now includes… UML support! Ever wished you could generate a sequence diagram from existing code? I had noticed this through Clemens’ blog, and I’m a bit puzzled to see Microsoft perform this U-turn, having previously stayed well away from anything related to UML. I’m intrigued enough to go and try it out to see how valuable the additions will be.

Dynamic languages and the DLR

Finally, I managed to catch Harry Pierson‘s session on dynamic languages and the Dynamic Language Runtime (DLR).  I have a fascination for the differences between different programming languages and paradigms, and the initiative from Microsoft to enable the use of existing dynamic languages on the .Net platform is a very interesting one.

The question many people using a statically typed language on the .Net platform will pose is: why would I want to (also) use a dynamic language? Harry really brought it home to me: with the DLR and the supported languages, Microsoft is aiming at developers currently using Python or Ruby, trying to get them on board by making it easy for them to switch to the Microsoft platform.

So, if you’re not currently using dynamic languages, should you care about this stuff? Well, if you are a believer in polyglot programming you should. This is the idea that within an application you use multiple languages, selecting the language that best fits the particular concern you’re trying to address. For example, in a Model-View-Controller application, you would write the view in HTML and JavaScript, the controller in a dynamic language like IronPython, and the model in a statically typed language like C#. Read the chapter on polyglot programming in the recently released ThoughtWorks Anthology for more information.
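This kind of mixing is exactly what the DLR’s hosting API enables. A minimal sketch (assuming IronPython 2.0 and its IronPython.Hosting namespace; the function is made up) of defining a function in Python and calling it from C#:

```csharp
using System;
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

class PolyglotDemo
{
    static void Main()
    {
        // Spin up a DLR script engine for Python
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();

        // Define a function in Python...
        engine.Execute("def double(x): return x * 2", scope);

        // ...and call it from C# as a regular delegate
        var doubler = scope.GetVariable<Func<int, int>>("double");
        Console.WriteLine(doubler(21));
    }
}
```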

One interesting thing to note about the DLR is that the original plan was to release it with four supported languages: IronRuby, IronPython, JavaScript and VBX, the last being a new dynamic variant of Visual Basic. VBX has now apparently been dropped, and the DLR will initially be released with just the first three languages. It looks like Microsoft has not yet made up its mind about the future of VB.

When .Net first came out, the differences between the implementations of VB and C# were surprisingly few, and customers’ choice between the two would invariably hinge on the history and familiarity of their existing programmer base, rather than on the merits of the particular language. With recent additions to VB like XML literals, these languages seem to be drifting apart again, and I would very much like to see people prefer one over the other because they like the language features better, not just because it’s what they are used to.

So the question is: will Microsoft rediscover VB as a dynamic language? That’s why I was curious to see how the VBX implementation for the DLR was taking shape. I spoke to Harry about this; he was rather tight-lipped, but hinted that we might get an announcement on these issues at the upcoming PDC.

And so..

All sessions at DevDays were recorded on video, so I’ll keep you posted when the materials are made available online. If you attended, let me know what you thought…


May 27, 2008

Microsoft MCPD Certification for .Net 3.5

Filed under: .Net,Certification,Microsoft — Freek Leemhuis @ 6:11 pm

Microsoft has recently published more details on the certification tracks for the .Net framework 3.5.
Most of my colleagues are, or are trying to become, MCPD for .Net 2.0. Below are the details of what you will need to do to get certified on the .Net 3.5 platform.

There are different MCPD (Microsoft Certified Professional Developer) tracks: you’re either a Windows, an ASP.Net or an Enterprise developer. For the ASP.Net MCPD track, here’s what you will need to do:

1. Pass the 70-536 exam: Application Development Foundation.
If you’re currently MCPD you already hold this exam.

2. Certify as MCTS: .NET Framework 3.5, ASP.NET Applications
You do this by passing two of the following exams:
Exam 70-502: TS: Microsoft .NET Framework 3.5, Windows Presentation Foundation Application Development (available)

Exam 70-503: TS: Microsoft .NET Framework 3.5, Windows Communication Foundation Application Development (available)

Exam 70-504: TS: Microsoft .NET Framework 3.5, Windows Workflow Foundation Application Development (available)

Exam 70-505: TS: Microsoft .NET Framework 3.5, Windows Forms Application Development (expected August 2008)

Exam 70-561: TS: Microsoft .NET Framework 3.5, ADO.NET Application Development (expected June 2008)

Exam 70-562: TS: Microsoft .NET Framework 3.5, ASP.NET Application Development (expected June 2008)

The last one is mandatory for the ASP.Net track, so you’ll need that one plus a choice of one out of the others above. Some of the exams are not yet available, so if you want to take one now I’d start with the WCF exam.

After you’ve passed the two exams you can upgrade your MCTS certification to MCPD by taking the last exam:

3. Pass the MCPD 70-564 exam.

Exam 70-564: PRO: Designing and Developing ASP.NET Applications using Microsoft .NET Framework 3.5 (expected December 2008)

There will be an upgrade exam:
Exam 70-567: Upgrade: Transition your MCPD Web Developer Skills to MCPD ASP.NET Developer 3.5 (available soon)

Our experience with the 2.0 track has shown that people have had better success rates taking the individual exams rather than the upgrade exam, so that’s what I would advise.

March 24, 2008

Change tracking in LINQ to SQL

Filed under: .Net,Linq,Microsoft — Freek Leemhuis @ 9:49 pm

I was catching up with the ASP.NET Dynamic Data controls that will ship with the upcoming ASP.Net extension pack, and it struck me that this is a framework that, much like LINQ to SQL, is excellent for writing demo applications. Without writing too much code, you can very quickly create something that looks and even acts like a ‘real’ application.

Here’s the thing: I don’t get paid to write demos. A framework only has real merit for me if it helps me build ENTERPRISE applications.

Don’t get me wrong, I absolutely love LINQ and some of the new language enhancements that have come with it. These days, when I’m working in a team that uses Visual Studio 2005, I hate not being able to use LINQ, object initializers, automatically implemented properties and the like.

Looking at LINQ to SQL, though, it seems to have been designed without much thought for how you would actually use it in a multi-layer environment. Sure, you can use it to replace your Data Access Layer, but passing data classes around using Attach leaves a lot of plumbing to be done manually, and the whole ‘replay your changes, then submit’ model feels awkward at best and looks to have been put in almost as an afterthought.

I understand that the design goal of Persistence Ignorance prevents any persistence plumbing in the objects themselves. I am probably a heretic in the eyes of OO purists, but I care more about productivity than about my objects qualifying as POCO. It would be nice if a framework like LINQ to SQL offered you a choice: go the POCO route, or accept some persistence coding that takes care of change tracking across data tiers. DataSets, spawn of the devil though they may be, have a convenient DiffGram model that allows one to pass data changes across tiers. If a persistence framework can solve this for you, it makes life much easier. LLBLGen has it. Can we not have something in LINQ to SQL (or the Entity Framework!) that would allow one to choose how to handle change tracking?

Dinesh has written about the design goals of LINQ to SQL. What’s interesting is that he says in this post:

we don’t yet have a simple roundtripping and change tracking on the client.

(Emphasis mine.) So it would seem they have given it some consideration, but must have decided to keep it clean and simple, and left it out. Will we see enhancements in this area for LINQ to SQL? Matt Warren has posted a reply to this forum post:

There is also a technology in the works that automatically serializes LINQ to SQL objects to other tiers, change tracks and data binds the objects for you on that tier and serializes the objects back, solving the cyclic reference serialization problem and the indescribable schema problem for objects with change sets

Further on, Matt describes how the Attach method was enhanced so that you can include both the old and the new version of an entity object. The ‘technology in the works’, however, sounds more like the tidbits that have come out regarding a mini connectionless DataContext. Matt has written here that it was not implemented …for lack of time.
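The enhanced Attach overload Matt refers to looks roughly like this in use (the Customer entity and MyDataContext are made up for the example): you hand LINQ to SQL both versions, and it computes the update for you.

```csharp
// A hypothetical Customer entity travels to a client tier, comes back
// modified, and is re-attached together with its original version.
public void SaveCustomer(Customer original, Customer modified)
{
    using (var db = new MyDataContext())
    {
        // Attach(modified, original) makes the context treat 'modified'
        // as an update relative to 'original', so SubmitChanges will
        // generate the appropriate UPDATE statement.
        db.Customers.Attach(modified, original);
        db.SubmitChanges();
    }
}
```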

I have found no information on when (or if) this will be available.
