A fairly common scenario that we come across when building automated test suites using Selenium is the need to get past the security exception that Firefox pops up when you try to access a self-signed HTTPS page.
Luckily there is quite a cool plugin for Firefox called ‘Remember Certificate Exception’ which automatically clicks through the exception and allows the automated tests to keep running rather than getting stuck on the certificate exception page.
One other thing to note is that if the first time you hit an HTTPS page is via an HTTP POST then the automated test will still get stuck: after the plugin has accepted the certificate exception it tries to refresh the page, which leads to the ‘Do you want to resend the data?’ pop-up.
We’ve previously got around this by writing an AutoIt script which waits for that specific pop-up and then ‘presses the spacebar’, but another way is to ensure that you hit an HTTPS page with a GET request at the beginning of the build so that the certificate exception is accepted for the rest of the test run.
To use the plugin in the build we need to add it to the Firefox profile that we use to run the build.
On Windows you need to run this command (having first ensured that all instances of Firefox are closed):
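The command itself is missing from this copy of the post, but launching the Firefox Profile Manager on Windows is typically done like this (the installation path is an assumption – adjust it for your machine):

```shell
# Close all Firefox instances first, then launch the Profile Manager
# so a new profile can be created for Selenium to use.
# -no-remote forces a new instance even if Firefox is already running.
"C:\Program Files\Mozilla Firefox\firefox.exe" -ProfileManager -no-remote
```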
We then need to create a profile which points to the ‘/path/to/selenium/profile’ directory that we will use when launching Selenium Server. There is a much more detailed description of how to do that in this blog post.
After that we need to launch Firefox with that profile and then add the plugin to the profile.
Having done that we need to tell Selenium Server to use that profile whenever it runs any tests which can be done like so:
java -jar selenium-server.jar -firefoxProfileTemplate /path/to/selenium/profile
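As a sketch of the GET-first warm-up described above, the start of a build might look like the following. This uses the Selenium RC Java client that ships with selenium-server.jar; the host, port and base URL are illustrative assumptions, and it needs that jar on the classpath and the server already running:

```java
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class HttpsWarmUp {
    public static void main(String[] args) {
        // Connect to the Selenium Server launched with -firefoxProfileTemplate;
        // "localhost", 4444 and the base URL are assumptions for illustration.
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                "https://our-app.example.com/");
        selenium.start();

        // Hit an HTTPS page with a plain GET before any test issues a POST,
        // so the 'Remember Certificate Exception' plugin accepts the
        // certificate exception for the rest of the run.
        selenium.open("/");

        selenium.stop();
    }
}
```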
One of the things that we’ve been working on lately to reduce the overall time that our full build takes to run is to split the acceptance tests into several small groups so that we can run them in parallel.
We are using Cruise as our build server, so the ability to have multiple agents running against different parts of the build at the same time comes built in.
We then had to work out the best environment in which to run more agents.
I guess the traditional approach to the problem would be to get some extra machines to run the build, but this wasn’t possible, so our initial solution was to use our developer virtual machines as build agents.
This worked in principle, but the environment became far too slow to work in while the build was running, and the Selenium windows that popped up as each test ran became amazingly annoying. We could only really use this approach to set up agents on developer machines that weren’t being used that day.
We therefore hit upon the idea of having two virtual machines running on each developer machine, splitting the machine’s resources between the two.
From experimenting, it seems that an agent running a build which includes Selenium tests needs around 1.2 GB of RAM assigned to it to run at a reasonable rate. Our developer machines have a total of 3 GB of RAM, so the majority of the remainder is used by the development virtual machine.
Our development virtual machine environments run marginally slower when a build is running on the same machine, but we’ve reduced that problem by spreading the build across as many developer machines as possible so that it runs less frequently on each one.
I think it’s a worthwhile trade-off for reducing our total build time significantly – running sequentially, all our tests would take over an hour, but in parallel we’ve managed a 10-15 minute turnaround at worst.
While trying to see if we could make that any faster we toyed with the idea of having two build virtual machines on the same box, but that didn’t provide the improvement we expected due to I/O constraints – our build does quite a bit of reading from and writing to disk, and having two builds doing this at the same time seemed to slow down both of them.
Overall, though, it’s working out pretty well for us as a quick feedback mechanism on whether or not our system is working correctly end to end.
I’ve not used Selenium much in my time – all of my previous projects have been client-side applications or service layers – but I’ve spent a bit of time getting acquainted with it this week.
While activating some acceptance tests this week I noticed quite a strange error happening if the tests ran in a certain order:
com.thoughtworks.selenium.SeleniumException: ERROR: Current window or frame is closed!
I tried commenting out some of the tests and then running the others, and everything worked fine, but when I ran all of them the problem returned.
Eventually, after some eagle-eyed debugging by a colleague of mine, we realised that the only difference was the lack of the following line in the failing test:
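The line itself is missing from this copy of the post, but given the documentation quoted just below it was a selectWindow call with a null window ID – in the Selenium RC Java client that would be:

```java
// Re-select the original browser window that Selenium launched;
// per the Selenium docs, a null windowID refers to that original window.
selenium.selectWindow(null);
```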
A quick look at the documentation explains precisely why this works:
if windowID is null, (or the string “null”) then it is assumed the user is referring to the original window instantiated by the browser.
What had in fact happened was that the previous test had launched a pop-up window, and when it closed that window, focus wouldn’t return to the original window launched when Selenium first started unless we executed the above method call.