Mark Needham

Thoughts on Software Development

Archive for March, 2010

Saved from an episode of bear shaving

with one comment

As part of our continuous integration build we have a step which stops a Windows service and uninstalls it, then later reinstalls it from the latest files checked into the repository.

One problem we've been having recently is that, even though the service should already have been uninstalled by that point, a lock was being kept on the log4net dll in our build directory, a directory that we tear down as one of the next steps.

It was a bit of a sporadic failure and whenever we went on the build machine and checked for file handles with Process Explorer the log4net dll never showed up on the list.

I wrongly assumed that it couldn’t be getting locked by the Windows service because the Windows service would already be gone by that stage.

We therefore started on a bear shaving expedition which involved trying to make use of Unlocker and File Assassin to release the file handle so that we’d be able to delete the directory.

We weren't able to get either of those tools to do the job from the command line, and luckily at this stage another colleague had the idea of checking in Task Manager whether the Windows service's process was actually removed immediately when we tore it down.

After realising that it wasn't, we were able to address the problem by adding a step to the build which looks for that process and kills it.

I guess the real solution to this problem would be for the uninstall to block until the service was properly uninstalled, but this seems to be the closest we can get to addressing the cause rather than the effect.

Written by Mark Needham

March 30th, 2010 at 6:57 am

Reading Code: underscore.js

with 9 comments

I’ve been spending a bit of time reading through the source code of underscore.js, a JavaScript library that provides lots of functional programming support which my colleague Dave Yeung pointed out to me after reading my post about building a small application with node.js.

I’m still getting used to the way that JavaScript libraries are written but these were some of the interesting things that I got from reading the code:

  • There are a couple of places in the code where the author makes an expression run conditionally by putting it on the right hand side of an '&&'.

    For example on line 129 in the ‘filter’ function:

      // Return all the elements that pass a truth test.
      // Delegates to JavaScript 1.6's native filter if available.
      _.filter = function(obj, iterator, context) {
        if (nativeFilter && obj.filter === nativeFilter) return obj.filter(iterator, context);
        var results = [];
        each(obj, function(value, index, list) {
          iterator.call(context, value, index, list) && results.push(value);
        });
        return results;
      };

    I would probably have used an if statement to check the result from calling 'iterator' but this way is more concise and pretty neat (there's a small standalone sketch comparing the two forms after this list).

    The same type of thing is done on line 150 in the ‘every’ function:

      _.every = function(obj, iterator, context) {
        iterator = iterator || _.identity;
        if (nativeEvery && obj.every === nativeEvery) return obj.every(iterator, context);
        var result = true;
        each(obj, function(value, index, list) {
          if (!(result = result && iterator.call(context, value, index, list))) _.breakLoop();
        });
        return result;
      };

    The result is collected and the loop will also exit if the value of 'result' is ever false, which is again a cool way to organise the code.

  • It’s also quite cool that you can assign a value to a variable from within a conditional – this isn’t possible in any of the other languages that I’ve used previously as far as I’m aware.

    It’s even more evident in the ‘max’ function:

      _.max = function(obj, iterator, context) {
        if (!iterator && _.isArray(obj)) return Math.max.apply(Math, obj);
        var result = {computed : -Infinity};
        each(obj, function(value, index, list) {
          var computed = iterator ? iterator.call(context, value, index, list) : value;
          computed >= result.computed && (result = {value : value, computed : computed});
        });
        return result.value;
      };

    'result' is conditionally assigned on line 196, but only if the computed value is greater than or equal to the current computed value. Again an if statement is avoided.

    Another interesting thing about this function is that it specifically checks the type of the ‘obj’ passed in which reminded me about the discussion around Twitter having those sorts of checks in their Ruby code around a year ago. I guess that type of thing would be more prevalent in library code than in an application though.

  • I hadn’t come across the !! construct which is used to turn a JavaScript expression into its boolean equivalent:
      _.isArray = nativeIsArray || function(obj) {
        return !!(obj && obj.concat && obj.unshift && !obj.callee);
      };

    Without using '!!' the expression would return 'undefined' (or 'null') in the case that one of those functions on 'obj' was not set. '!!' forces the return value to be a proper boolean, 'false' in that case (see the sketch after this list).

  • Another technique used in this code base is that of dynamically adding methods to the prototype of an object:
      // Add all mutator Array functions to the wrapper.
      each(['pop', 'push', 'reverse', 'shift', 'sort', 'splice', 'unshift'], function(name) {
        var method = ArrayProto[name];
        wrapper.prototype[name] = function() {
          method.apply(this._wrapped, arguments);
          return result(this._wrapped, this._chain);
        };
      });

    This is quite a cool use of metaprogramming, although it isn't initially obvious how those functions end up on the object unless you know what to look for. It does significantly reduce the code needed to add these functions to the 'wrapper' object's prototype, avoiding the 'same whitespace, different values' problem that Neal Ford outlines in his article about harvesting idiomatic patterns in our code.
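
To make the first and third points a bit more concrete, here's a small standalone sketch (mine, not code from underscore.js) showing that the '&& expression' form is just a terser version of an if statement, and what '!!' does to a value which isn't already a boolean:

    var results = [];
    [1, 2, 3, 4].forEach(function(value) {
      // concise form: the push only happens when the left hand side is truthy
      value % 2 === 0 && results.push(value);

      // equivalent long-hand form
      if (value % 2 === 0) {
        results.push(value);
      }
    });
    // results is now [2, 2, 4, 4] because both forms pushed the even values

    var obj = { concat : function() {} };
    var withoutCoercion = obj && obj.concat && obj.unshift;   // undefined, since obj.unshift doesn't exist
    var withCoercion = !!(obj && obj.concat && obj.unshift);  // false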

Written by Mark Needham

March 28th, 2010 at 8:02 pm

Posted in Reading Code


Finding the assumptions in stories

with 4 comments

My colleague J.K. has written an interesting blog post where he describes a slightly different approach that he's been taking to writing stories, to help move the business value in a story towards the beginning of the description and avoid detailing a solution in the 'I want' section of the story.

To summarise, J.K.’s current approach involves moving from the traditional story format of:

As a...
I want...
So that...

To the following:

As a...
I want...
By...

I quite like this idea, and I've noticed that even without using this story format technical solutions are sometimes described as the business requirement, so we need to look beyond the 'I want' section of the story card to find the real value and locate the assumptions which led to the story being written that way.

To give a recent example, a colleague and I picked up the following story:

As the business
I want an HTTP module to be included on the old site
So that I can redirect a small percentage of traffic to the new site

We assumed that other options for redirecting traffic must have already been analysed and written off in order for this to be the suggested solution so we initially started looking at how to implement it.

After a bit of investigation it became clear that this was going to be quite an invasive solution to the problem and would involve re-testing of the whole old site (since we would be making a change there) before it could be put into production. That would take 2 weeks.

Speaking with Toni and Ashok about the problem, it became clear that it should be possible to control whether traffic was going to the old or new site by changing the configuration in our load balancer, Netscaler.

Discussing this further, we found out that this had been tried previously and hadn't quite worked out as expected, which was why it hadn't been considered as an option.

We spent some time talking through using Netscaler with the network team and agreed to try it out on a performance environment to see whether it would balance traffic in the way that we wanted, which it did.

We still need to make sure that it works as expected in production but it was an interesting example of how solutions can be excluded based on prior experience even though they might still be useful to us.

After this experience I'll certainly be more aware of when a story describes a solution rather than a requirement, and will try to look for the actual requirement behind it.

Written by Mark Needham

March 26th, 2010 at 1:14 am

Posted in Agile


Selenium, Firefox and HTTPS pages

without comments

A fairly common scenario that we come across when building automated test suites using Selenium is the need to get past the security exception that Firefox pops up when you try to access a self-signed HTTPS page.

Luckily there is quite a cool plugin for Firefox called ‘Remember Certificate Exception‘ which automatically clicks through the exception and allows the automated tests to keep running and not get stuck on the certificate exception page.

One other thing to note is that if the first time you hit an HTTPS page is on an HTTP POST then the automated test will still get stuck, because after the plugin has accepted the certificate exception it will try to refresh the page, which leads to the 'Do you want to resend the data' pop up.

We've previously got around this by writing a script using AutoIt which waits for that specific pop up and then 'presses the spacebar', but another way is to ensure that you hit an HTTPS page with a GET request at the beginning of the build so that the certificate exception is accepted for the rest of the test run.

To use the plugin in the build we need to add it to the Firefox profile that we use to run the build.

In Windows you need to run this command (having first ensured that all instances of Firefox are closed):

firefox.exe --ProfileManager

We then need to create a profile which points to the ‘/path/to/selenium/profile’ directory that we will use when launching Selenium Server. There is a much more detailed description of how to do that on this blog post.

After that we need to launch Firefox with that profile and then add the plugin to the profile.

Having done that we need to tell Selenium Server to use that profile whenever it runs any tests which can be done like so:

java -jar selenium-server.jar -firefoxProfileTemplate /path/to/selenium/profile

Written by Mark Needham

March 25th, 2010 at 8:09 am

Posted in Testing


TDD: Consistent test structure

with 3 comments

While pairing with Damian we came across the fairly common situation where we’d written two different tests – one to handle the positive case and one the negative case.

While tidying up the tests after we’d got them passing we noticed that the test structure wasn’t exactly the same. The two tests looked a bit like this:

[Test]
public void ShouldSetSomethingIfWeHaveAFoo()
{
	var aFoo = FooBuilder.Build.WithBar("bar").WithBaz("baz").AFoo();
 
	// some random setup
	// some stubs/expectations
 
	var result = new Controller(...).Submit(aFoo);
 
	Assert.That(result.HasFoo, Is.True);
}
[Test]
public void ShouldNotSetSomethingIfWeDoNotHaveAFoo()
{
	// some random setup
	// some stubs/expectations
 
	var result = new Controller(...).Submit(null);
 
	Assert.That(result.HasFoo, Is.False);
}

There isn't a great deal of difference between these two bits of code but the structure of the tests isn't the same, because in the second test I passed 'null' straight into 'Submit' rather than first assigning it to a descriptive variable.

Damian pointed out that if we were just glancing at the tests in the future it would be much easier for us if the structure was exactly the same. This would mean that we would immediately be able to identify what the test was supposed to be doing and why.

In this contrived example we would just need to pull out the ‘null’ into a descriptive variable:

[Test]
public void ShouldNotSetSomethingIfWeDoNotHaveAFoo()
{
	Foo noFoo = null;
 
	// some random setup
	// some stubs/expectations
 
	var result = new Controller(...).Submit(noFoo);
 
	Assert.That(result.HasFoo, Is.False);
}

Although this is a simple example I’ve been trying to follow this guideline wherever possible and my tests now tend to have the following structure:

[Test]
public void ShouldShowTheStructureOfMarksTests()
{
	// The test data that's important for the test
 
	// Less important test data
 
	// Expectation/Stub setup
 
	// Call to object under test
 
	// Assertions
}

As a neat side effect I've also noticed that it seems to be easier to spot duplication that we could extract when the tests follow this structure.

Written by Mark Needham

March 24th, 2010 at 6:53 am

Posted in Testing


Defensive Programming and the UI

with 4 comments

A few weeks ago I was looking at quite an interesting bug in our system which initially didn’t seem possible.

On one of our screens we have some questions that the user fills in which read a bit like this:

  • Do you have a foo?
    • Is your foo an approved foo?
    • Is your foo special?

i.e. you would only see the 2nd and 3rd questions on the screen if you answered yes to the first question.

However, if you went to the next page of the form having answered 'Yes' to the first question, then came back to this page and changed your answer to 'No', the answers to the other two questions would still be posted when the data was submitted to the server.

We were therefore ending up with a request with the following values submitted:

hasFoo		"No"
approvedFoo 	"Yes"
specialFoo	"Yes"

Later on in the application we had some logic which decided what to do with the user request and this one was erroneously being approved because we hadn’t catered for the condition where the first question was answered ‘No’ and the other questions had ‘Yes’ values.

In theory we should have written some client side code to reset the optional questions if the first one was answered 'No', but even then it's still possible to post whatever data you want to the server if you try hard enough.
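
For what it's worth, the client side reset would only be a few lines of JavaScript. This is just a sketch of the idea rather than the code we shipped; it assumes jQuery is available and that the questions are radio buttons named after the same fields shown above:

// sketch: when the answer to 'Do you have a foo?' changes to 'No',
// reset the two dependent questions back to 'No' as well
$(function() {
    $("input[name='hasFoo']").change(function() {
        if ($(this).val() === "No") {
            $("input[name='approvedFoo'][value='No']").attr("checked", true);
            $("input[name='specialFoo'][value='No']").attr("checked", true);
        }
    });
});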

Given that, we need to behave in a somewhat defensive way somewhere on the server.

My initial thinking was that perhaps we should change that logic to handle this scenario, but when I talked the problem through with Mike he pointed out that it would make more sense to fix the data earlier in the chain.

I ended up writing some code in the controller to change the latter two fields to 'No' if the answer to the first question was 'No'. We're not really using custom binders on the project, otherwise I think this is the type of logic that would go in there.

Overall I'm not really a fan of defensive programming because it often seems to lead to overly complicated code, but in the case of user input it does make sense, and the general guideline seems to be to fix any logically invalid values as close to the entry point of the application as possible.

Written by Mark Needham

March 22nd, 2010 at 11:42 pm

node.js: A little application with Twitter & CouchDB

with 5 comments

I’ve been continuing to play around with node.js and I thought it would be interesting to write a little application to poll Twitter every minute and save any new Tweets into a CouchDB database.

I first played around with CouchDB in May last year and initially spent a lot of time trying to work out how to install it before coming across CouchDBX which gives you one click installation for Mac OS X.

I’m using sixtus’ node-couch library to communicate with CouchDB and I’ve written a little bit of code that allows me to call the Twitter API.

What did I learn?

  • I’ve been reading through Brian Guthrie’s slides from his ‘Advanced Ruby Idioms So Clean You Can Eat Off Of Them‘ talk from RubyConfIndia and one of the suggestions he makes is that in Ruby there are only 6 acceptable types of signatures for functions:
    • 0 parameters
    • 1 parameter
    • 2 parameters
    • A hash
    • 1 parameter and a hash
    • A variable number of arguments passed in as an array

    It seems to me that the same guidelines would be applicable in JavaScript as well except instead of a Hash we can pass in an object with properties and values to serve the same purpose. A lot of the jQuery libraries I’ve used actually do this anyway so it’s an idiom that’s in both languages.

    I originally wrote my twitter function so that it took in several of the arguments individually:

    exports.query = function(username, password, method) { ... }

    After reading Brian’s slides I realised that this was quickly going to become a mess so I’ve changed the signature to only take in the most important parameter (‘method’) on its own with the other parameters passed in an ‘options’ object:

    exports.query = function(method, options) { ... }

    I’ve not written functions that take in parameters like this before but I really like it so far. It really helps simplify signatures while allowing you to pass in extra values if necessary.

  • I find myself porting higher order functions from C#/F# into JavaScript whenever I can’t find a function to do what I want – it’s fun writing them but I’m not sure how idiomatic the code I’m writing is.

    For example I wanted to write a function to take query parameters out of an options object and create a query string out of them. I adapted the code from node-couch and ended up with the following:

    Object.prototype.filter = function(fn) {
        var result = {};
        for (var name in this) {
            if (this.hasOwnProperty(name)) {
                if (fn.call(this[name], name, this[name])) {
                    result[name] = this[name];
                }
            }
        }
        return result;
    };
     
    Object.prototype.into = function(theArray, fn) {
        for (var name in this) {
            if (this.hasOwnProperty(name)) {
                theArray.push(fn.call(this[name], name, this[name]));
            }
        }
        return theArray;
    };
     
    function encodeOptions(options) {
        var parameters = [];
        if (typeof(options) === "object" && options !== null) {
            parameters = options
                            .filter(function(name) {
                                return !(name === "username" || name === "password" || name === "callback");})
                            .into([], function(name, value) {
                                return encodeURIComponent(name) + "=" + encodeURIComponent(value); });
        }
        return parameters.length ? ("?" + parameters.join("&")) : "";
    }

    I’m not sure how wise it is adding these functions to the object prototype – I haven’t had any problems so far but I guess if other libraries I’m using changed the prototype of these built in types in the same way as I am then I might get unexpected behaviour.

    Would the typical way to defend against this be to check if a function is defined before trying to define one and throw an exception if so? Or is adding to the prototype just a dangerous thing to do altogether? (There's a small sketch of that kind of guard after this list.)

    Either way I’m not altogether convinced that the code with these higher order functions actually reads better than it would without them.

  • I'm finding it quite interesting that a lot of the code I write around node.js depends on callbacks, which means that if you have 3 operations that depend on each other then you end up with nested callbacks, which almost reads like code written in a continuation passing style.

    For example I have some code which needs to do the following:

    • Query CouchDB to get the ID of the most recently saved tweet
    • Query Twitter with that ID to get any tweets since that one
    • Save those tweets to CouchDB
    var server = http.createServer(function (req, res) {
        couchDB.view("application/sort-by-id", {
            descending : true,
            success : function(response) {
                twitter.query("friends_timeline", {
                    ...
                    since_id : response.rows[0].value.id,
                    callback : function(tweets) {
                        tweets.each(function(tweet) {
                            couchDB.saveDoc(tweet, {
                                success : function(doc) {
                                    sys.log("Document " + doc._id + " successfully saved")
                                },
                                error : function() {
                                    sys.log("Document failed to save");
                                }
                            });
                        }); 
     
                    }
                });
            },
            error : function() {
                sys.log("Failed to retrieve documents");
            }
        });
     
        ...
    });

    There’s a ‘success’ callback for calling ‘couchDB.view’ and then a ‘callback’ callback for calling ‘twitter.query’ and finally a ‘success’ callback from calling ‘couchDB.saveDoc’.

    To me it's not that obvious what the code is doing at first glance - perhaps because I'm not that used to this style of programming - but I'm intrigued as to whether there's a way to write the code to make it more readable. One possible approach, pulling each step out into a named function, is sketched after this list.

  • I haven’t yet worked out a good way to test drive code in a node.js module. As I understand it all the functions we define except for ones added to the ‘exports’ object are private to the module so there’s no way to test against that code directly unless you pull it out into another module.

    At the moment I’m just changing the code and then restarting the server and checking if it’s working or not. It’s probably not the most effective feedback cycle but it’s working reasonably well so far.
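
On the question in the second point above about clashing with other code that extends the built in prototypes, the simplest defence I know of is to check whether the property is already defined before adding it and to fail loudly if it is. This is only a sketch of that idea (reusing the 'filter' implementation from above) rather than something the code currently does:

    // only add the helper if nothing else has defined it already
    if (typeof Object.prototype.filter !== "undefined") {
        throw new Error("Object.prototype.filter is already defined - refusing to overwrite it");
    }
    Object.prototype.filter = function(fn) {
        var result = {};
        for (var name in this) {
            if (this.hasOwnProperty(name) && fn.call(this[name], name, this[name])) {
                result[name] = this[name];
            }
        }
        return result;
    };

As for the nested callbacks in the third point, one approach which seems to make that style easier to scan is to pull each step out into a named function so that the top level reads more like the list of steps above. This is just a re-arrangement of my own excerpt, so it assumes the same 'couchDB', 'twitter' and 'sys' objects and I haven't tested it:

    function fetchLatestTweetId(onId) {
        couchDB.view("application/sort-by-id", {
            descending : true,
            success : function(response) { onId(response.rows[0].value.id); },
            error : function() { sys.log("Failed to retrieve documents"); }
        });
    }

    function fetchTweetsSince(id, onTweets) {
        // other options (username, password etc.) omitted for brevity
        twitter.query("friends_timeline", { since_id : id, callback : onTweets });
    }

    function saveTweet(tweet) {
        couchDB.saveDoc(tweet, {
            success : function(doc) { sys.log("Document " + doc._id + " successfully saved"); },
            error : function() { sys.log("Document failed to save"); }
        });
    }

    // the steps now read top to bottom even though the flow is still callback based
    fetchLatestTweetId(function(id) {
        fetchTweetsSince(id, function(tweets) {
            tweets.each(saveTweet);
        });
    });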

I've put the code that I've written so far as gists on github.

The server can be run with the following command from the terminal:

node twitter-server.js

Written by Mark Needham

March 21st, 2010 at 10:13 pm

Posted in Javascript


TDD: Expressive test names

with 4 comments

Towards the end of a post I wrote just over a year ago I suggested that I wasn’t really bothered about test names anymore because I could learn what I wanted from reading the test body.

Recently, however, I’ve come across several tests that I wrote previously which were testing the wrong thing and had such generic test names that it wasn’t obvious that it was happening.

The tests in question were around code which partially clones an object but doesn’t copy some fields for various reasons.

Instead of documenting these reasons I had written tests with names like this:

[Test]
public void ShouldCloneFooCorrectly() { }
[Test]
public void ShouldSetupFooCorrectly() { }

When we realised that the code wasn’t working correctly, which didn’t happen until QA testing, these test names were really useless because they didn’t express the intent of what we were trying to test at all.

Damian and I spent some time writing more fine-grained tests which described why the code was written the way it was.

We also changed the name of the test fixture to be more descriptive as well:

[TestFixture]
public class WhenCloningFooTests
{
	[Test]
	public void ShouldNotCloneIdBecauseWeWantANewOneAssignedInTheDatabase() { }
 
	[Test]
	public void ShouldNotCopyCompletedFlagBecauseWeWantTheFooCompletionJobToPickItUp() { }
}

It seems to me that these new test names are more useful as specifications of the system behaviour, although we didn't go as far as you can with some frameworks where you create base classes and separate methods to describe the different parts of a test.

Despite that I think naming tests in this way can be quite useful so I’m going to try and write more of my tests like this.

Of course it’s still possible to test the wrong thing even if you are using more expressive names but I think it will make it less likely.

Written by Mark Needham

March 19th, 2010 at 6:06 pm

Posted in Testing


Functional C#: Continuation Passing Style

with 4 comments

Partly inspired by my colleague Alex Scordellis' recent post about lambda passing style, I spent some time trying out a continuation passing style on some of the code in one of our controllers to see how different the code would look compared to its current top to bottom imperative style.

We had code similar to the following:

public ActionResult Submit(string id, FormCollection form)
{
	var shoppingBasket = CreateShoppingBasketFrom(id, form);
 
	if (!validator.IsValid(shoppingBasket, ModelState))
	{
	    return RedirectToAction("index", "ShoppingBasket", new { shoppingBasket.Id });
	}
	try
	{
	    shoppingBasket.User = userService.CreateAccountOrLogIn(shoppingBasket);
	}
	catch (NoAccountException)
	{
	    ModelState.AddModelError("Password", "User name/email address was incorrect - please re-enter");
	    return RedirectToAction("index", "ShoppingBasket", new { Id = new Guid(id) });
	}
 
	UpdateShoppingBasket(shoppingBasket);
	return RedirectToAction("index", "Purchase", new { Id = shoppingBasket.Id });
}

The user selects some products that they want to buy and then just before they click to go through to the payment page we have the option for them to login if they are an existing user.

This code handles the server side validation of the shopping basket and tries to login the user. If it fails then we return to the original page with an error message. If not then we forward them to the payment page.

With a continuation passing style we move away from the imperative style of programming whereby we call a function, get a result and then do something with that result to a style where we call a function and pass it a continuation/block of code which represents the rest of the program. After it has calculated the result it should call the continuation.

If we apply that style to the above code we end up with the following:

public ActionResult Submit(string id, FormCollection form)
{
	var shoppingBasket = CreateShoppingBasketFrom(id, form);
	return IsValid(shoppingBasket, ModelState,
					() => RedirectToAction("index", "ShoppingBasket", new { shoppingBasket.Id} ),
					() => LoginUser(shoppingBasket, 
							() =>
								{
									ModelState.AddModelError("Password", "User name/email address was incorrect - please re-enter");
									return RedirectToAction("index", "ShoppingBasket", new { Id = new Guid(id) });
								},
							user => 
								{
									shoppingBasket.User = user;
									UpdateShoppingBasket(shoppingBasket);
									return RedirectToAction("index", "Purchase", new { Id = shoppingBasket.Id });
								}));
}
 
 
private RedirectToRouteResult IsValid(ShoppingBasket shoppingBasket, ModelStateDictionary modelState, Func<RedirectToRouteResult> failureFn, Func<RedirectToRouteResult> successFn)
{
 
	return validator.IsValid(shoppingBasket, modelState) ? successFn() : failureFn();
}
 
private RedirectToRouteResult LoginUser(ShoppingBasket shoppingBasket, Func<RedirectToRouteResult> failureFn, Func<User,RedirectToRouteResult> successFn)
{
	User user = null;
	try
	{
	    user = userService.CreateAccountOrLogIn(shoppingBasket);
	}
 
	catch (NoAccountException)
	{
		return failureFn();
	}
 
	return successFn(user);
}

The common theme in this code seemed to be that there were both success and failure paths for the code to follow depending on the result of a function, so I passed in both success and failure continuations.

I quite like the fact that the try/catch block is no longer in the main method and the different things that are happening in this code now seem grouped together more than they were before.

In general though the way that I read the code doesn’t seem that different.

Instead of following the flow of logic in the code from top to bottom we now need to follow it from left to right, and since that's not as natural the code is more complicated than it was before.

I do understand how the code works more than I did before I started playing around with this, but I'm not yet convinced this is a better approach to designing code in C#.

Written by Mark Needham

March 19th, 2010 at 7:48 am

Posted in .NET


Essential and accidental complexity

with 3 comments

I've been reading Neal Ford's series of articles on Evolutionary architecture and emergent design, and in the one about 'Investigating architecture and design' he discusses essential and accidental complexity, which I've previously read about in Neal's book, 'The Productive Programmer'.

Neal defines these terms like so:

Essential complexity is the core of the problem we have to solve, and it consists of the parts of the software that are legitimately difficult problems. Most software problems contain some complexity.

Accidental complexity is all the stuff that doesn’t necessarily relate directly to the solution, but that we have to deal with anyway.

I find it interesting to consider where the line is between essential complexity and accidental complexity, although Neal suggests that there's a spectrum between these two definitions upon which a piece of complexity sits.

I recently read 37signals' book 'Getting Real' and one of the things that really stood out for me was the idea of keeping an application simple and writing less software wherever possible.

I'm intrigued as to where this fits in with respect to essential and accidental complexity, because quite often we end up implementing features which aren't part of the application's core value but do add some value, and these become way more complicated than anyone would have originally imagined.

Quite often these won't be the first features played in a release because they're not considered top priority, and therefore when they are played the amount of analysis done on their potential impact is often less. As we implement these features they end up spreading across the application and increasing its overall complexity.

Another thing that often happens is that we start developing these features, complicating the code base as we go, before the business eventually decides that it’s not worth the time that it’s taking.

By this time the complexity has spread across the code and it’s quite difficult to get rid of it cleanly.

What typically seems to happen is that some of it will just end up staying around in the code base pretty much unused and confusing people who come across it and don’t understand what it’s being used for.

If we were coming from the angle of Domain Driven Design then we might say that any complexity in the core domain of our application is essential and we should expect that complexity to exist.

On the other hand, perhaps we should be more strict about what complexity is actually essential, try to ensure that only complexity around the absolutely vital features is allowed, and look to compromise on simpler solutions for features which play a more supporting role.

If we were to follow a definition along these lines then I think we would recognise that there is actually quite a lot of accidental complexity in our applications, because although some complex features do contribute to the solution they probably cause more pain than the value they add.

The takeaway for me is that we should be aware that this type of accidental complexity can creep into our applications, and make sure that the business knows the trade-offs we are making by implementing these types of features.

Written by Mark Needham

March 18th, 2010 at 11:21 pm