Freekshow

October 2, 2008

PostSharp: AOP for .Net

Filed under: .Net,AOP,Open Source — Freek Leemhuis @ 10:10 am

The updates to Microsoft’s reference architecture (find them on www.codeplex.com/AppArch) got me thinking about what a good reference implementation would be when adopting Domain Driven Design.

For now let’s focus on cross-cutting concerns, as they are depicted in the reference architecture.

 

The canonical example of a cross-cutting concern is logging, and I've come across quite a few applications that had logging code scattered across the entire codebase. A better way to separate these concerns is by using Aspect Oriented Programming (AOP). In the .Net world there are only a few frameworks that deal with AOP, and of these PostSharp is probably the best known. The Enterprise Library of course has the Policy Injection Application Block, which offers similar functionality.
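To illustrate the problem, here's the sort of thing I mean: a made-up method (the Logger and Account types are purely illustrative) where the actual business logic is buried between the logging calls.

// Illustrative only: the business logic is drowned out by hand-rolled
// logging calls that get repeated in every method of the application.
public void TransferFunds(Account from, Account to, decimal amount)
{
    Logger.Write("Entering TransferFunds");   // cross-cutting concern
    from.Withdraw(amount);                    // actual business logic
    to.Deposit(amount);                       // actual business logic
    Logger.Write("Exiting TransferFunds");    // cross-cutting concern
}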

I have been spending some time with PostSharp, and I really like the improvement it can bring to an application's design. And it's really not hard to do. I'll give a quick example:

After you’ve installed the PostSharp bits you’ll need to include two references, to PostSharp.Laos and PostSharp.Public.

First, you will need to create an aspect class; let's name it TraceAspect.

[Serializable]
public sealed class TraceAspect : OnMethodBoundaryAspect
{
    // Called just before the body of any method the aspect is applied to
    public override void OnEntry(MethodExecutionEventArgs eventArgs)
    {
        Console.WriteLine("User {0} entering method {1} at {2}", Environment.UserName, eventArgs.Method, DateTime.Now.ToShortTimeString());
    }

    // Called right after the method body completes
    public override void OnExit(MethodExecutionEventArgs eventArgs)
    {
        Console.WriteLine("User {0} exiting method {1} at {2}", Environment.UserName, eventArgs.Method, DateTime.Now.ToShortTimeString());
    }
}

A few things to note here:

  • You will need to mark the class with the Serializable attribute.
  • The OnMethodBoundaryAspect base class actually extends the Attribute class, so it's as if you're creating a custom attribute.
  • We’re implementing the aspect handlers by overriding the designated base class handlers, in this case OnEntry and OnExit.

We're now ready to apply the TraceAspect attribute to our code to add tracing:

class Program
{
    static void Main(string[] args)
    {
        Customer cust = GetCustomer(1);
        SaveCustomer(cust);
        Console.ReadKey();
    }

    [TraceAspect]
    public static Customer GetCustomer(int customerId)
    {
        Console.WriteLine("GetCustomer is executing");
        System.Threading.Thread.Sleep(2000); // taking its time...
        return new Customer { FirstName = "Private", LastName = "Ryan" };
    }

    [TraceAspect]
    public static void SaveCustomer(Customer customer)
    {
        Console.WriteLine("SaveCustomer is executing");
        System.Threading.Thread.Sleep(3000); // taking its time... again...
    }

    public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

As you can see, there's not much to it other than adding the [TraceAspect] attribute to our methods. When executing this little sample you'll get the expected output:

The magic is performed for us by the PostSharp post-compiler, which weaves the aspect code into the IL after the regular compilation step.

When you then use Reflector (I almost added Lutz's name here…) to look at the final assembly, you can see the weaving that has been done:

It's a bit of a one-man band behind it, but its creator Gael Fraiteur has done a fine job of releasing it as open source, and it actually has good documentation. Check it out on www.postsharp.org. PostSharp is a very powerful tool, capable of a lot more than what I can show here.
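For example, aspects ultimately derive from MulticastAttribute, so you don't have to decorate every method by hand. If I have the wildcard syntax right (check the PostSharp documentation to be sure), a single assembly-level attribute will apply the aspect to whole namespaces at once:

// Applies TraceAspect to all types whose namespace matches the wildcard.
// Syntax as I understand it from the PostSharp 1.x docs; verify before use.
[assembly: TraceAspect(AttributeTargetTypes = "MyCompany.MyApp.*")]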

 

September 3, 2008

Unit testing part 3: Design for testability

Filed under: .Net,Microsoft,Unit Testing — Freek Leemhuis @ 11:20 am

This is part 3 in a series of posts about unit testing using Visual Studio. Part 1 and part 2 focused mainly on the MS Test environment within Visual Studio.

Writing unit tests, whether one subscribes to Test Driven Development practices or not, often forces one to consider the design of the code under test. This is probably where the term 'Design for testability' comes from: not all code is easily testable, and you should therefore design your code so it can easily be tested. I've always found the term strange, as the ultimate purpose of code is not that it is tested, but that it works and is maintainable. If you can manage loose coupling through OO principles such as the Single Responsibility Principle and the Open Closed Principle, then the code is testable as a result of good design.

Or, as Uncle Bob says:

“The act of writing a unit test is more an act of design than of verification”

Having said that, what is generally meant by design for testability?

Consider the following class:

public class Clown
{
    public void TellAJoke()
    {
        Joke joke = new JokeRepository().GetRandomJoke();
        Console.WriteLine(joke.question);
        Console.Read();
        Console.WriteLine(joke.punchline);
        DrumRoll.Play();
    }
}

Just a few lines of code, and we're already in a pickle. The code is not easily testable, and the problem is tight coupling. You cannot test the TellAJoke method without also invoking the JokeRepository, Joke and DrumRoll classes.

The GoF: program to an interface, not an implementation.

(It’s actually in the first chapter, so you might catch it just before you fall asleep :-))

Let's take this advice to heart and do some refactoring:

public class Clown
{
    ISoundEffect _soundeffect;
    IDisplay _display;
    public Clown(IDisplay display, ISoundEffect soundeffect)
    {
        _soundeffect = soundeffect;
        _display = display;
    }
    public void TellAJoke(IJoke joke)
    {
        _display.Show(joke.question);
        _display.Show(joke.punchline);
        _soundeffect.Play();
    }
}

We have defined interfaces for Joke, Display and SoundEffect. We can therefore call on the might of polymorphism if we want to use alternative methods of displaying text or producing sound effects (we may want to switch to trumpets in a future iteration!). In this refactoring we are using dependency injection to remove the coupling with DrumRoll (ISoundEffect) and Console (IDisplay) by passing them as arguments to the constructor.

Note that we have also defined an interface for Joke, and added the joke as an argument to be passed into the method. Because our JokeRepository retrieves jokes from a database backend, we cannot really use the repository in testing: we want to keep our tests lean and mean, and that means no database trips!
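For completeness, the interfaces involved are trivial. They look something along these lines, together with the console-backed display used in the test below (the member names are my assumption, chosen to match the code above):

// Minimal interfaces assumed by the refactored Clown class.
public interface IJoke
{
    string question { get; }
    string punchline { get; }
}

public interface IDisplay
{
    void Show(string text);
}

public interface ISoundEffect
{
    void Play();
}

// Console-backed implementation of IDisplay.
public class ConsoleDisplay : IDisplay
{
    public void Show(string text)
    {
        Console.WriteLine(text);
    }
}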

For the purpose of testing the TellAJoke method, we can now substitute the IJoke parameter with our own stub:

[TestMethod]
public void Clown_TellAJoke_Test()
{
    Clown pipo = new Clown(new ConsoleDisplay(), new DrumRoll());

    // create Joke stub
    Joke jokeStub = new Joke
    {
        question = "How many developers does it take to change a lightbulb?",
        punchline = "They won't. It's a hardware problem"
    };

    // redirect console output
    var writer = new StringWriter();
    Console.SetOut(writer);

    // call the method under test
    pipo.TellAJoke(jokeStub);

    string output = writer.ToString();
    Assert.IsTrue(output.Contains(jokeStub.question));
    Assert.IsTrue(output.Contains(jokeStub.punchline));
}

Note that we are still using the console to display text, but if we decide to change that, all we would need to do is implement the IDisplay interface in an alternative implementation. The key here is that we don't have to change our Clown class to change the implementation of display or sound effect.
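For instance, swapping the console for something else is just a matter of writing another class against the same interface. A silly illustration:

// Illustration: an alternative IDisplay that shouts the joke in uppercase.
// The Clown class does not change at all when we swap this in.
public class ShoutingDisplay : IDisplay
{
    public void Show(string text)
    {
        Console.WriteLine(text.ToUpper() + "!!!");
    }
}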

Are you mocking to me?

So what about mocking? What's that all about? The seminal reference in this regard is Martin Fowler's article Mocks Aren't Stubs, where he explains the difference between the two. Basically, if you are using stubs you're usually testing state, whereas mocks allow you to test the interaction between objects.

In the previous example we used a stub for the Joke class, but we left out tests for the sound effect. What we would like to test is that the sound effect sounds when delivering the punchline. Let's create a stub to substitute the sound effect – we don't want actual sound effects when running our unit tests, fun as that might be…

 

public class SoundEffectStub : ISoundEffect
{
    public void Play()
    {
        Console.WriteLine("Sound!");
        Console.Read();
    }
}

And we could test that the 'sound' gets played by redirecting the console output as before. However, we would then be testing state, and state that is introduced by our stub at that. An alternative is to use a mocking framework that allows us to test the interaction between these objects. The only thing we want to test is that the sound effect gets played, and it's for this kind of behavior verification that mocking frameworks really shine.

Typemock is a popular mocking framework in the .Net space, and we can use it for our little example like so:

[TestMethod]
public void Clown_TellAJoke_Test()
{
    MockManager.Init();
    Mock mockSound = MockManager.Mock(typeof(SoundEffectDrumRoll));
    Clown pipo = new Clown(new ConsoleDisplay(), new SoundEffectDrumRoll());

    // create Joke stub
    Joke jokeStub = new Joke
    {
        question = "How can you tell when a programmer has had sex?",
        punchline = "When he's washing the pepper spray out of his eyes."
    };

    // redirect console output
    var writer = new StringWriter();
    Console.SetOut(writer);

    // set expectations
    mockSound.ExpectCall("Play");

    // call the method under test
    pipo.TellAJoke(jokeStub);

    string output = writer.ToString();
    Assert.IsTrue(output.Contains(jokeStub.question));
    Assert.IsTrue(output.IndexOf(jokeStub.punchline) > output.IndexOf(jokeStub.question));

    mockSound.Verify();
}

Typemock is different from other mocking frameworks in that it uses the .Net profiler API to monitor and intercept program execution. This makes it very powerful, because it is not tied to the same restrictions as other mocking frameworks. Rhino Mocks and similar solutions require that you code to interfaces, and any method you want to mock must be marked as virtual. Some people have even voiced concerns that Typemock is too powerful, in that it does not force you to design for testability. I think Roy has put that argument to bed with his reply.

In the above example we can use the original SoundEffectDrumRoll implementation: by setting up the expectation on the Play method, Typemock makes sure the method does not actually get executed, and by using the Verify method we make sure the method was in fact called.

Especially when working with legacy code you will find that Typemock offers powerful features that allow you to test code that would otherwise not be testable without refactoring. On the other hand, if you write new code and use Typemock, you can ignore some of the design issues we've talked about here. But that does not mean it's the right thing to do.


August 18, 2008

JP’s contest

Filed under: .Net,Reading — Freek Leemhuis @ 9:47 am

Jean-Paul Boodhoo is a well-known authority on agile methodologies and patterns in the .Net community. He is the author of many published articles, and I have tremendously enjoyed the series of webcasts he has produced for DNRtv on Demystifying Design Patterns.

JP also organizes the Nothing but .Net training bonanzas, and reading the descriptions of these bootcamp-style events has always had me thinking of ways to sign up for one of them. So when he opened a competition for stories that would foster the passion in developers, I put in a few words. And lo and behold, I made it into the top 5! So now you can vote for me or any of the other contestants to give your favorite author the chance to enjoy a great week of training.

UPDATE:

So I did not win the top prize, which I guess means I need to persuade my company to stump up the money for the training course! I did come in third, which JP is generous enough to reward with a 130 dollar Amazon voucher. I'm planning to spend it on this set of books:

Feathers' book has had some rave reviews, and I'm curious what tips and practices it offers, especially since it seems I will be working on a lot of legacy code in the near future…
This book has apparently set a new standard in the publishing of programming books. It's got syntax code coloring! And it's a very good book on WPF, which I want to dive into a bit more.
McConnell is my favorite author, and this is one of his books I have yet to get my hands on.
Uncle Bob is another one of my favorites, and this brand new tome will no doubt contain many a pearl of wisdom. Despite the ugly cover.

Thanks to JP for stacking up my reading list!

August 13, 2008

Devdays 2008 videos

Filed under: .Net,Events,Microsoft — Freek Leemhuis @ 8:20 pm

Just a quick note here to point those of you who are interested to the online videos of DevDays 2008.

June 2, 2008

DevDays 2008 impressions

Filed under: .Net,ADO.Net Data Services,Entity Framework,Events,Microsoft — Freek Leemhuis @ 1:31 pm

Keynote

The keynote this year was titled 'Why Software Sucks', by .Net professor David Platt. I missed most of it while lining up to get tickets (thanks Mark ;-)), but it was basically the same session that Platt has been delivering for a number of years now, most recently at TechEd Barcelona in 2007, and I was a bit surprised to find this talk promoted to keynote for the DevDays. Must have been a slow day at the office for original content or new announcements…
If you’ve not seen Platt’s talk before, it’s pretty entertaining. You can watch it (from a similar session) online here.

Silverlight 2.0

The session from Daniel Moth was about Silverlight 2.0. Previous versions of Silverlight were all about media and video delivery, and you could only program against them in JavaScript. With version 2.0 you can finally write managed code to run in the browser. This, combined with the power of XAML, makes for a very compelling platform for delivering RIAs (most self-respecting conferences these days include a Rich Internet Application track). Silverlight 2 was of course announced during Mix, so if you want to check it out go watch the Silverlight sessions on sessions.visitmix.com. They've recently redone the sessions so that the streaming includes the presenter as well as a separate stream showing the slides and demos.

The ADO.Net Entity Framework

The ADO.Net Entity Framework session from Mike Taulty was a good introduction to the subject. Mike pointed out a new website, www.datadeveloper.net, where you can find news, tutorials and other resources on new data technologies such as the Entity Framework and the ADO.Net Data Services. I was a bit puzzled that Mike spent considerable time of his session on how you can still use the old-fashioned ADO API (DataReader, Command) to program against the EF. I can think of only a small number of cases where you'd want to do that.
Check out this webcast for more details on the EF.

WCF on the Web

For me, the most interesting session of the day was delivered by Peter Himschoot, who showed what additional work has been done in WCF for the web in version 3.5. More specifically, WCF now supports JSON and REST. It's interesting to see that a framework like WCF has been designed at a high enough level of abstraction that, while it was built when services were all very much SOAP-oriented, it has now been extended to include new concepts like JSON and REST.

ASP.Net MVC Framework

On the Friday, Alex Thissen kicked off with an introduction to the MVC framework. The MVC framework will be an alternative to the current ASP.Net WebForms model. It allows the programmer to control the HTML markup, rather than having it generated by user controls. It does away with postbacks and viewstate, so you get a much cleaner model that allows for better Separation of Concerns and better testability. As always, Alex was very thorough, and I was impressed to see he managed to sprinkle Unity and Moq into his demo without losing the audience.

LINQ to SQL

Next up was Anko Duizer, who discussed various options for including LINQ to SQL in your architecture. Do you regard LINQ to SQL as your data layer, or do you just use it as part of your data layer? This was a good follow-up to Anko's previous introductory sessions on LINQ to SQL, and it addressed some of the difficulties you can run into when figuring out best practices for leveraging LINQ and LINQ to SQL.

ADO.Net Data Services

Mike Taulty then had another session, this time on ADO.Net Data Services (codename Astoria). Using this technology you can take a data model, like a LINQ to SQL or Entity Framework model, and make the classes available through REST-based services. The framework will be made available in Service Pack 1 for Visual Studio 2008, currently in beta.
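The idea is that your entities become addressable through plain HTTP URIs. Illustrative examples of the Astoria URI conventions (the service name is made up):

http://myserver/Northwind.svc/Customers('ALFKI')
http://myserver/Northwind.svc/Customers('ALFKI')/Orders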

Rosario

Marcel de Vries showed some of the new features of Rosario, the upcoming version of Visual Studio and the .Net framework. His talk focused mostly on Team System. The primary goal of the new Rosario features is to bring together the three main stakeholders of a software project: business and IT governance, IT operations, and development. Some of the new features include:

Historical debugging: a new test runner application allows a tester to record test runs, which can then be replayed on a programmer's machine, thereby reproducing the bugs the tester stumbled upon, but also creating a debug session where there previously was none! This should get rid of some 'but it works on my machine' discussions…

Functional testing (codename Camano): a test manager for running functional tests. It provides test execution assistance, workflow, rich bug logging and reporting.

Marcel also showed some features of the Team Architect edition, which now includes… UML support! Ever wished you could generate a sequence diagram from existing code? I had noticed this through Clemens' blog, and I'm a bit puzzled to see Microsoft perform this U-turn, having previously stayed well away from anything related to UML. I'm intrigued enough to go and try it out to see how valuable the additions will be.

Dynamic languages and the DLR

Finally, I managed to catch Harry Pierson's session on dynamic languages and the Dynamic Language Runtime (DLR). I have a fascination for the differences between programming languages and paradigms, and the initiative from Microsoft to enable the use of existing dynamic languages on the .Net platform is a very interesting one.

The question many people who are using a statically typed language on the .Net platform will pose is: why would I want to (also) use a dynamic language? Harry really brought it home to me: with the DLR and the supported languages, Microsoft aims at developers currently using Python or Ruby, and wants to get them on board by making it easy for them to switch to the Microsoft platform.

So, if you're not currently using dynamic languages, should you care about this stuff? Well, if you are a believer in polyglot programming you should. This is the idea that within an application you use multiple languages, selecting the language that best fits the particular concern you're trying to address. For example, in a Model-View-Controller application, you would write the view in HTML and JavaScript, the controller in a dynamic language like IronPython, and the model in a statically typed language like C#. Read the chapter on polyglot programming in the recently released ThoughtWorks Anthology for more information.

One interesting thing to note on the DLR is that the original plan was to release it with four supported languages: IronRuby, IronPython, JavaScript and VBX, the last being a new dynamic variant of Visual Basic. VBX has now apparently been dropped, and the DLR will initially be released with just the first three languages. It looks like Microsoft has not yet made up its mind about the future of VB.

When .Net first came out, the differences between the implementations of VB and C# were surprisingly few, and customers' choice between the two would invariably hinge on the history and familiarity of their existing programmer base, rather than on the merits of the particular language. With recent additions to VB like XML literals, these languages seem to be drifting apart again, and I would very much like to see people preferring one over the other because they like the language features better, not just because it's what they are used to.

So the question is, will Microsoft rediscover VB as a dynamic language? That’s why I was curious to see how the VBX implementation for the DLR was taking shape…. I spoke to Harry about this, and he was rather tight-lipped about it but hinted that we might get an announcement on these issues at the upcoming PDC.

And so..

All sessions at DevDays were recorded on video, so I'll keep you posted when materials are made available online. If you attended, let me know what you thought…

 

May 27, 2008

Microsoft MCPD Certification for .Net 3.5

Filed under: .Net,Certification,Microsoft — Freek Leemhuis @ 6:11 pm

Microsoft has recently published more details on the certification tracks for the .Net framework 3.5.
Most of my colleagues are, or are trying to become, MCPD for .Net 2.0. Below are the details of what you will need to do to get certified on the .Net 3.5 platform.

There are different MCPD (Microsoft Certified Professional Developer) tracks: you're either a Windows, ASP.Net or Enterprise developer. For the ASP.Net MCPD track, here's what you will need to do:

1. Pass the 70-536 exam: Application Development Foundation.
If you're currently an MCPD, you already hold this exam.

2. Certify as MCTS: .NET Framework 3.5, ASP.NET Applications.
You do this by passing two of the following exams:
  • Exam 70-502: TS: Microsoft .NET Framework 3.5, Windows Presentation Foundation Application Development (exam available)
  • Exam 70-503: TS: Microsoft .NET Framework 3.5, Windows Communication Foundation Application Development (exam available)
  • Exam 70-504: TS: Microsoft .NET Framework 3.5, Windows Workflow Foundation Application Development (exam available)
  • Exam 70-505: TS: Microsoft .NET Framework 3.5, Windows Forms Application Development (expected August 2008)
  • Exam 70-561: TS: Microsoft .NET Framework 3.5, ADO.NET Application Development (expected June 2008)
  • Exam 70-562: TS: Microsoft .NET Framework 3.5, ASP.NET Application Development (expected June 2008)

The last one is mandatory for the ASP.Net track, so you'll need that one plus one of the others above. Some of the exams are not yet available, so if you want to take one now I'd start with the WCF exam.

After you've passed the two exams, you can upgrade your MCTS certification to MCPD by taking the final exam:

3. Pass the MCPD 70-564 exam.

Exam 70-564: PRO: Designing and Developing ASP.NET Applications using Microsoft .NET Framework 3.5 (expected December 2008)

There will be an upgrade exam:
Exam 70-567: Upgrade: Transition your MCPD Web Developer Skills to MCPD ASP.NET Developer 3.5 (available soon)

Our experience running the 2.0 track has shown that people have better success rates taking the individual exams than the upgrade exam, so that's what I would advise.


Geek night out

Filed under: .Net,Events,Programming — Freek Leemhuis @ 12:01 pm

I went for a 'geek night out' yesterday to the Language Café at Sokyo. It turned out to be a very interesting evening. First off, Rob Vens spoke about the evolution of programming languages. Rob's an interesting cat: rather than focusing on technical details he will speak at length on topics such as General Semantics, science fiction, technology in general and a host of other subjects. Rob likes to get on his soapbox and talk about his favorite subjects, and it made for an interesting tour through history. Plus we got a host of reference reading material.

One of the key points I took from the talk: the near future in programming is all about 'back to the future'. Most innovation that will take place will be driven by ideas that have previously been explored in earlier platforms and languages. Rob's idea is that in the beginning of computer science people were more open-minded and ideas more innovative, and that the focus has since shifted to making small improvements rather than pursuing big ideas.

When we broke up into separate sessions, with tracks on Java, C#, Erlang and Smalltalk, this idea was confirmed by the subjects that were discussed regarding the future directions of these platforms. Both the Java and the C# track discussed how parallel computing will be brought into the language, an area where Erlang, for example, has supported programmers for over 10 years. Pieter Joost, the C# track leader, has a write-up on the parallel extensions subject here.
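To give a flavour of what that looks like in C#: with the Parallel Extensions CTP you can hand the iterations of a loop to the runtime, which spreads them over the available cores. A minimal sketch based on the June 2008 CTP (the API may well change before release):

using System;
using System.Threading; // Parallel lives in System.Threading.dll in the CTP

class ParallelDemo
{
    static void Main()
    {
        int[] squares = new int[1000];

        // Sequential version
        for (int i = 0; i < squares.Length; i++)
        {
            squares[i] = i * i;
        }

        // Parallel version: the runtime partitions the iteration range
        // over the available cores
        Parallel.For(0, squares.Length, i => { squares[i] = i * i; });

        Console.WriteLine("done");
    }
}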

The other example of future directions in C# was the idea of Design by Contract, available as Spec#, a Microsoft Research project. This style of programming has been around in Eiffel since the '80s, so it's nothing new per se, but it's interesting to see how we could use it to improve our code when applying the principles to the 'modern' languages on the .Net platform.
In the current download you can write statements in C# like:

class ArrayList
{
    void Insert(int index, object value)
        requires 0 <= index && index <= Count otherwise ArgumentOutOfRangeException;
        requires !IsReadOnly && !IsFixedSize otherwise NotSupportedException;
    {
        // ...
    }
}

The keywords requires, otherwise etc. are used to extend the signature of the method with a contract that specifies which values are allowed, which are not, and which exceptions are thrown when the contract is violated. Read the research paper on Spec# for full details.

Voices from the Microsoft camp have stated that these extensions are not likely to be released as extensions to the C# language, but rather as additions to the framework, so you can imagine this being made available as attributes and asserts rather than the keywords in the current download. It will be interesting to see how this would affect the process of Test-Driven Development: instead of writing your test first, you would write your contract first.
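Purely speculating on what that could look like (the Requires attribute below is invented for illustration; nothing like this has been announced):

using System;

// Invented for illustration: the same contract expressed as attributes
// rather than language keywords. This is not an announced API.
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
class RequiresAttribute : Attribute
{
    public RequiresAttribute(string condition, Type exceptionType) { }
}

class ArrayList
{
    [Requires("0 <= index && index <= Count", typeof(ArgumentOutOfRangeException))]
    [Requires("!IsReadOnly && !IsFixedSize", typeof(NotSupportedException))]
    public void Insert(int index, object value)
    {
        // ...
    }
}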

Will we move from TDD to DBC?

May 8, 2008

Slides for the Océ presentation

Filed under: .Net,Events,Linq,Speaking — Freek Leemhuis @ 12:55 pm

For those who attended my LINQ talk yesterday at the Océ headquarters: thanks for coming. Below are the slides used for the presentation. Included are some resources (links, book recommendations) that I did not get around to mentioning. I guess 2 hours was not enough…

ado-vnext LINQ presentatie Océ 

April 20, 2008

Unit Testing in Visual Studio 2008 – part 1

Filed under: .Net,Unit Testing — Freek Leemhuis @ 2:54 pm

Unit testing – does it need an introduction?

Unit tests can massively improve the maintainability of any application. Bugs are found right after they get introduced, and refactoring code can be done with great confidence. As a consultant, I participate in many different development teams, and in the last year or so I've done a number of audits and coaching sessions at client sites. One thing that has struck me is how enormously the approach to unit testing differs from shop to shop. Some have been doing it for years and are very well versed in it. Others are struggling to integrate it into their practices, and others still have not made any strides at all. So, in my experience, the practice of unit testing is not as ubiquitous as you might expect. I've also found that, as with writing software in general, writing good unit tests is hard to do. It requires insight and experience, and for those starting out it can be a frustrating experience. My first set of unit tests were a fragile bunch: sometimes they would break by the dozen, other times they would break while the actual code was running fine.

So the answer to the first question, 'does it need an introduction', would be: yes, plenty of times it does! That's why I've decided to dedicate a number of posts to the art of unit testing. This post is the first in the series, and here I'll focus on the environment: I'll introduce the built-in unit testing framework of Visual Studio 2008. In later posts I'll talk more about test patterns, design for testability, mocking and code coverage.

Introducing the Visual Studio 2008 testing framework

Microsoft has provided MS Test for unit testing in Visual Studio since the 2005 version, but those tools were only available if you were running one of the Team System editions. Luckily Microsoft has since come around, and in the 2008 version unit testing is also available in the Professional Edition (but not the Standard Edition). Things like code coverage analysis remain limited to the Team Suite editions. Find a comparison of the features for the different editions here.

First of all, you will need to add a separate test project to your solution. Unit tests are not stored among your source classes, but always kept in a separate test project. Select your solution and click File, Add, New Project.

Note that by default, a number of items are created in your test project.

They include AuthoringTests.txt, a text file containing general information about testing; ManualTest1.mht, a template-style manual test for adding functional test descriptions to your project (I've never met anybody who did); and a blank unit test, UnitTest1.cs. Most likely you will want to change the default settings so you don't have to manually delete these items every time. Luckily you can do that through the options menu by clearing the checkboxes displayed below.

We add a reference in the test project to the project we're trying to test, and we're now ready to add a test. Let's assume that we have a piece of code like the following:

partial void OnCompanyNameChanged()
{
    if (CompanyName.Length > 20)
    {
        CompanyName = CompanyName.Substring(0, 20);
    }
}

We're using LINQ to SQL and the Northwind database, and I've added a partial class for the Customer entity. In the partial class I can add validation code like the above. The company name can contain at most 20 characters, and while we're validating things like that in the user interface, it's good practice to validate server-side as well. In this case, we're not throwing an exception but simply taking the first 20 characters if the name provided is longer.
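To be explicit about where that snippet lives: it goes in a partial class extension of the designer-generated Customer entity, in a separate file, along these lines:

// Partial class extension for the LINQ to SQL generated Customer entity.
// The OnCompanyNameChanged partial method itself is declared in the
// designer-generated file; we only supply the implementation here.
public partial class Customer
{
    partial void OnCompanyNameChanged()
    {
        if (CompanyName.Length > 20)
        {
            CompanyName = CompanyName.Substring(0, 20);
        }
    }
}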
Now we can add a test to the test project. Click Add, New Item, Class and add the new class. For naming, there are a number of conventions you can choose from. I usually name the test class <ClassToTest>_Tests.cs, so in this case I'd name it NwindDataContext_Tests.cs.
In the test class you can have a number of test methods, each testing one or more of the methods in the class under test. This way, you have a 1:1 mapping between your classes and your test classes. Of course, you can have (and often you'd want to have) more than one test method for each method.
Since we’ve added a plain class, we need to introduce the namespace for the testing framework:

using Microsoft.VisualStudio.TestTools.UnitTesting;

We have to define the test class and methods as public. We decorate the class with the [TestClass] attribute, and add a new test method decorated with the [TestMethod] attribute.

[TestClass]
public class Customer_Tests
{
    [TestMethod]
    public void OnNameChanged_MoreThan20Chars_TakesFirst20Chars()
    {
        string testName = "abcdefghijklmnopqrstuvw";
        Customer customer = new Customer();
        customer.CompanyName = testName;
        string expectedName = "abcdefghijklmnopqrs";
        Assert.AreEqual(expectedName, customer.CompanyName,
            "OnNameChanging should take only first 20 characters");
    }
}

For the test methods I’m using the naming convention

<MethodUnderTest>_<StateInput>_<ExpectedResult>.

I've also used another common convention in unit testing, in that I've set up an expected value and I'm comparing it with the actual value. When we use Assert.AreEqual to compare the two, the framework infers that the first value is the expected result and the second the actual result. The third parameter of Assert.AreEqual is used to specify a detailed description that is displayed in the event that the test fails.
If we run the test (right-click, choose Run Test), we'll see in the test results window that the test fails:

Double-clicking on the result brings up the details of the test run:

In the error message we see that the expected value differs from the actual value, and we see the description that we provided in the test.
Of course, the reason the test fails is a simple counting error: I included only 19 characters in the expected value, so as the test results indicate, I missed the letter 't'.
Fixing this by appending the letter in the test method results in a pass:

We've seen a number of test windows already. Clicking on the Test menu reveals there are more testing-related windows:

The Test View Window

The Test View Window is a list of all test methods in all test projects of the solution. From this window, you can make a selection of the tests you want to run, and then choose to run or to debug the selected tests.

If you have a large number of tests, as is the case here (it’s the Enterprise Library) you will sometimes want to filter the list. Here I’ve filtered on the search term ‘isolatedstorage’ to narrow down the list of test methods.

The Test List Editor

If you have a large number of tests, it is more convenient to partition the tests into separate groups. The Test List Editor is where you assign tests to particular groups.

For example, if I’m working on Caching, it can be convenient to move all tests related to caching to a separate list, so you can easily pick out a group of tests you want to run.
Personally, I don't use the Test List Editor much. The grouping does not automatically reflect the physical grouping you have in your test project, which can be confusing. Apart from that, I believe it's better to run all unit tests rather than a subset, since a change might introduce an effect in parts of the codebase that are covered by unit tests in other test lists.

The Test Results Window

We have already seen this window in the previous paragraphs. In this window the results of your last test run are displayed. By default the framework keeps the last 25 test runs, and you can select the results of a previous run from the dropdown box.

Back to the test

Okay, why have I not used the separate unit test template to add a test? Well, I wanted to show you the 'bare bones' of what makes up a unit test. If we select the template for a unit test, we get a wizard-style dialog that forces us to choose the code that we want to test. This is of course contrary to the Test Driven Development paradigm, where you want to write your tests before you write the code. Let's run the wizard now and see where it takes us. Select New, Unit Test and you will see the following dialog displayed.

You can use the Settings button to call up the following window where you can name the new test classes and methods:


In this case, we have not changed the default names and end up with a CustomerTests test class (that's not too bad) and an OnCompanyNameChangedTest test method. This is not according to the naming convention we had in mind, so if you're using this option be sure to rename your methods so they express exactly what you're intending to test.
In addition, there's a fair amount of code generated in our test class. For starters, there's this bit:

private TestContext testContextInstance;

/// <summary>
/// Gets or sets the test context which provides
/// information about and functionality for the current test run.
/// </summary>
public TestContext TestContext
{
    get
    {
        return testContextInstance;
    }
    set
    {
        testContextInstance = value;
    }
}

The TestContext has members like TestDir (returns the path to the test folder) and, in the case of an ASP.Net test, RequestedPage (returns a reference to the aspx page). It's nice to have these options, but in most test cases it's a case of YAGNI, and therefore clutter.
Next up is a region called Additional test attributes, which contains some very useful suggestions:

#region Additional test attributes
  //
  //You can use the following additional attributes as you write your tests:
  //
  //Use ClassInitialize to run code before running the first test in the class
  //[ClassInitialize()]
  //public static void MyClassInitialize(TestContext testContext)
  //{
  //}
  //
  //Use ClassCleanup to run code after all tests in a class have run
  //[ClassCleanup()]
  //public static void MyClassCleanup()
  //{
  //}
  //
  //Use TestInitialize to run code before running each test
  //[TestInitialize()]
  //public void MyTestInitialize()
  //{
  //}
  //
  //Use TestCleanup to run code after each test has run
  //[TestCleanup()]
  //public void MyTestCleanup()
  //{
  //}
  //
  #endregion

We'll get to the content of these attributes later. However, do we really want these instructions sitting in every test class? More clutter.
Finally, we get to the meat. There’s an actual test method generated:

/// <summary>
/// A test for OnCompanyNameChanged
/// </summary>
[TestMethod()]
[DeploymentItem("nwind.BLL.dll")]
public void OnCompanyNameChangedTest()
{
    Customer_Accessor target = new Customer_Accessor();
    target.OnCompanyNameChanged();
    Assert.Inconclusive("A method that does not return a value cannot be verified.");
}

We have a skeleton test method, and can start thinking about how to rename and build this into the actual test(s) that we want to perform. Note that a [DeploymentItem] attribute is assigned, which you really only need if you want to run your tests from a separate deployment folder.

So are my private parts exposed now?

I'm glad you noticed. This is a little trick the framework has played in order to allow the testing of private and internal methods. It uses reflection to create a shadowed copy of the code under test, and runs the tests against that rather than against the actual code (Customer_Accessor rather than Customer). When using the New Unit Test wizard, this accessor assembly is created automatically, regardless of whether any private members exist.

Now that we’ve explored the MS Test environment, we can dive into the testing itself. This I’ll save for the next post.


 

April 16, 2008

Using WCF WSHttpBinding without installing .Net framework 3.0

Filed under: .Net — Freek Leemhuis @ 9:38 pm

The naming of WinFX as .Net framework 3.0 has caused a lot of misunderstanding. Recently I had to link up some WCF services to a .Net 2.0 project. Fine, just upgrade to 3.0 you'd say; 3.0 only adds some stuff, and doesn't replace anything in the 2.0 runtime, right? Well, in this case the project had a dependency on a third-party CMS system, and that particular vendor came out with the party line that I've seen used more often: we support 2.0, but not 3.0. What this really means is: we're not up to speed on these new technologies, and we don't know what 3.0 actually is, but since we've got no experience with it, we can't say that we support it. Fair enough.
So that left me wondering how best to deploy the WCF services. Of course, I could just use BasicHttpBinding and plain web service references to generate proxies, but in this case the services had security requirements that are best covered by using certificates. You can configure WCF services with certificates if you use WSHttpBinding, but not when you use BasicHttpBinding.
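For reference, the kind of setup I mean looks roughly like this in code. This is a minimal sketch: the service contract and certificate details are hypothetical, and in practice you'd put most of this in config rather than code.

using System;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;

[ServiceContract]
interface IMyService
{
    [OperationContract]
    string Ping();
}

class MyService : IMyService
{
    public string Ping() { return "pong"; }
}

class ServiceSetup
{
    static void Main()
    {
        // WSHttpBinding supports certificate-based message security
        WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);
        binding.Security.Message.ClientCredentialType =
            MessageCredentialType.Certificate;

        // The address and certificate subject name are made up
        ServiceHost host = new ServiceHost(typeof(MyService),
            new Uri("http://localhost:8000/MyService"));
        host.Credentials.ServiceCertificate.SetCertificate(
            StoreLocation.LocalMachine, StoreName.My,
            X509FindType.FindBySubjectName, "MyServiceCert");
        host.AddServiceEndpoint(typeof(IMyService), binding, "");
        host.Open();
    }
}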

One way to solve this would be to use the WSE (Web Services Enhancements) library, but having used WCF I figured I'd see what needs to be installed for WCF to function properly without the 3.0 framework installed. It turned out you can do this relatively easily by distributing the WCF DLLs with the solution.
I added the following DLLs to the solution:

System.ServiceModel.dll
System.Runtime.Serialization.dll
System.IdentityModel.dll
System.IdentityModel.Selectors.dll
SMDiagnostics.dll
Microsoft.Transactions.Bridge.dll

I also had to take a number of sections that normally sit in the 2.0 Machine.config file and place them in the web.config file:

<sectionGroup name="system.runtime.serialization" type="System.Runtime.Serialization.Configuration.SerializationSectionGroup, System.Runtime.Serialization, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
   <section name="dataContractSerializer" type="System.Runtime.Serialization.Configuration.DataContractSerializerSection, System.Runtime.Serialization, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
  </sectionGroup>

<sectionGroup name="system.serviceModel" type="System.ServiceModel.Configuration.ServiceModelSectionGroup, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
   <section name="behaviors" type="System.ServiceModel.Configuration.BehaviorsSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="bindings" type="System.ServiceModel.Configuration.BindingsSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="client" type="System.ServiceModel.Configuration.ClientSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="comContracts" type="System.ServiceModel.Configuration.ComContractsSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="commonBehaviors" type="System.ServiceModel.Configuration.CommonBehaviorsSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowDefinition="MachineOnly" allowExeDefinition="MachineOnly"/>
   <section name="diagnostics" type="System.ServiceModel.Configuration.DiagnosticSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="extensions" type="System.ServiceModel.Configuration.ExtensionsSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="machineSettings" type="System.ServiceModel.Configuration.MachineSettingsSection, SMDiagnostics, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowDefinition="MachineOnly" allowExeDefinition="MachineOnly"/>
   <section name="serviceHostingEnvironment" type="System.ServiceModel.Configuration.ServiceHostingEnvironmentSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="services" type="System.ServiceModel.Configuration.ServicesSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
  </sectionGroup>
  <sectionGroup name="system.serviceModel.activation" type="System.ServiceModel.Activation.Configuration.ServiceModelActivationSectionGroup, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
   <section name="diagnostics" type="System.ServiceModel.Activation.Configuration.DiagnosticSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="net.pipe" type="System.ServiceModel.Activation.Configuration.NetPipeSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
   <section name="net.tcp" type="System.ServiceModel.Activation.Configuration.NetTcpSection, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
  </sectionGroup>
and also this bit:

<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="persistenceProvider" type="System.ServiceModel.Configuration.PersistenceProviderElement, System.WorkflowServices, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add name="workflowRuntime" type="System.ServiceModel.Configuration.WorkflowRuntimeElement, System.WorkflowServices, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add name="enableWebScript" type="System.ServiceModel.Configuration.WebScriptEnablingElement, System.ServiceModel.Web, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add name="webHttp" type="System.ServiceModel.Configuration.WebHttpElement, System.ServiceModel.Web, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add name="Microsoft.VisualStudio.Diagnostics.ServiceModelSink.Behavior" type="Microsoft.VisualStudio.Diagnostics.ServiceModelSink.Behavior, Microsoft.VisualStudio.Diagnostics.ServiceModelSink, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/>
    </behaviorExtensions>
    <bindingElementExtensions>
      <add name="webMessageEncoding" type="System.ServiceModel.Configuration.WebMessageEncodingElement, System.ServiceModel.Web, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add name="context" type="System.ServiceModel.Configuration.ContextBindingElementExtensionElement, System.WorkflowServices, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
    </bindingElementExtensions>
    <bindingExtensions>
      <add name="wsHttpContextBinding" type="System.ServiceModel.Configuration.WSHttpContextBindingCollectionElement, System.WorkflowServices, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add name="netTcpContextBinding" type="System.ServiceModel.Configuration.NetTcpContextBindingCollectionElement, System.WorkflowServices, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add name="webHttpBinding" type="System.ServiceModel.Configuration.WebHttpBindingCollectionElement, System.ServiceModel.Web, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
    </bindingExtensions>
  </extensions>
</system.serviceModel>

 

Worked like a charm. One thing to keep in mind with this solution is that these sections are not allowed in both the Machine.config AND your solution's web/app.config, which means that if you install the 3.0 or 3.5 framework on the server after deployment, you will have to take them out of your config files again.

