Monday, January 14, 2013

Ready, Set...GO Test Run

Can you relate to this scenario?  

I arrive at work, get coffee and attend the morning scrum meeting.  I share what I did yesterday, what I plan to do today and any impediments I may have encountered.  I sit at my desk and check to see if there is a new build ready for testing.  Of course there is.  Ok, let me download the latest build, set it up, do a quick check, verify the fixed bugs and then continue verifying user stories.  Man!  I wish I had more time to do other things that would make my job easier, instead of repetitively testing over and over again.  #&%/(¤!!!!!  The build doesn’t work!  #¤&/(“¤ Why is this not working!
Frustrating isn’t it!?

One thing about agile I support is innovating ways to reduce or eliminate waste in the testing process.  Doing so gives us more time to explore our product and to improve stagnated parts of the process that limit and inhibit our ability to quantify findings or to test as widely as required.  As many of us are maturing our skills and methods of test automation, we are finding ways to reach the utopia of no-touch test automation.

Think about the time you gain with good, reliable automated solutions that provide the validation you require to move forward with your test effort, with the assurance that the application meets the team’s quality guidelines.  So where does one begin with No Touch Testing?

The Configuration Matrix

For many of us, the test execution effort is a repetitive exercise across a configuration matrix, injecting prime variables that can make a slight alteration in the product’s behavior.  Often we execute a base set of tests across the entire configuration matrix, with specific test cases targeted at a particular configuration-induced parameter: for example, verifying the calculated hedge of a currency option priced at intra-day versus end-of-day rates to ensure the option price and volatility curve are calculated accurately.  Additionally, most of today’s applications must be fully functional web- or window-based applications that are also able to operate within a mobile OS.

A configuration matrix can assist us in minimizing the number of actual test cases we need to create regardless of the method we elect to use for test case creation.  The additional benefit is isolating probable problems based on a configuration matrix parameter.

Often, when we have used this technique to help with our test execution effort, we had a base configuration we used to verify the build before crawling through the matrix.  It was perfect for creating a smoke test configuration we could use to verify a new build was ready for testing.  A configuration matrix can be created as a graph, an XML file, a comma-delimited file, a database or part of the settings contained in a test management system.

We can use different techniques to develop the configuration matrix, depending on the resources we have available.  It can be as simple as creating separate property files for each configuration, or as sophisticated as leveraging GraphWalker to walk through the matrix and launch the tests with randomly selected parameters.
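As a minimal sketch of the idea, here is a configuration matrix expressed as a Python dictionary. The axis names and values are purely illustrative; a real matrix would reflect your own product's operating systems, browsers and business parameters. It shows the three uses mentioned above: the full crawl, a base smoke configuration and a random pick (the latter being similar in spirit to letting GraphWalker choose a path).

```python
import itertools
import random

# Hypothetical configuration matrix: each axis lists the values to cover.
matrix = {
    "os": ["Windows 10", "Ubuntu 22.04"],
    "browser": ["Chrome", "Firefox"],
    "pricing_mode": ["intra-day", "end-of-day"],
}

# Full crawl: every combination in the matrix.
all_configs = [dict(zip(matrix, combo))
               for combo in itertools.product(*matrix.values())]

# Base configuration: a single combination used to smoke test a new build.
base_config = {axis: values[0] for axis, values in matrix.items()}

# Random selection of one parameter per axis for a quick variation run.
random_config = {axis: random.choice(values) for axis, values in matrix.items()}

print(len(all_configs))  # 2 * 2 * 2 = 8 combinations
```

The same data could just as easily live in an XML or comma-delimited file; the dictionary simply keeps the example self-contained.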

The Clean Up

At times you need to uninstall the existing build and delete any residual files and folders that contain property settings or test files that could interfere with a clean installation verification.  You can build this capability into your Start Test method: uninstall and remove the existing installation along with any supporting files and folders.

Download the Build

Although it takes only a few minutes to browse to a continuous integration server such as Jenkins or Hudson and check whether a build is ready for testing, those few minutes add up, especially at the end of a release or sprint, when we may download an updated build several times in one day.  These servers provide a web client that allows us to browse, locate and download the build we wish to test.  In addition, they also provide a REST web service interface we can leverage to perform the following:

  • Locate the desired LastSuccessfulBuild
  • Download it
To perform this feat programmatically, one can use the open method of the XMLHTTP object to interrogate the API, locate the build, test whether it is ready for download and then download it.

Tip:  Use the ResponseText property to get the contents of the API response; the call often will not return the results as ResponseXML.
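As a rough Python equivalent of the steps above, Jenkins exposes the last successful build at a `lastSuccessfulBuild/api/json` endpoint whose response lists the build's artifacts. The server URL and job name below are placeholders; adapt them to your own CI server.

```python
import json
import urllib.request

# Hypothetical Jenkins base URL and job name; adapt to your CI server.
JENKINS = "http://ci.example.com"
JOB = "my-product-nightly"

def latest_build_info(opener=urllib.request.urlopen):
    """Ask the Jenkins JSON API for the last successful build's metadata."""
    url = f"{JENKINS}/job/{JOB}/lastSuccessfulBuild/api/json"
    with opener(url) as resp:
        return json.load(resp)

def artifact_urls(info):
    """Build the download URL for every artifact listed in the build JSON."""
    return [f"{info['url']}artifact/{a['relativePath']}"
            for a in info.get("artifacts", [])]
```

Feeding each URL from `artifact_urls` to `urllib.request.urlretrieve` then downloads the build without ever opening a browser.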

As a bonus, you can have your TestFramework notify you that a successful build has been downloaded and is currently under verification.

Install It

Now that the latest build has been downloaded, you can install it using either the installation wizard or a silent installation, if one exists for your application.  Depending on the tool you are using for automation, I have often found one cannot automate the installation wizard.  A tool such as AutoIt can help if your current test tool is unable to manage the Windows installation dialogs.  An automated installation technique can also be useful for deploying updated websites, changing their default configuration or updating your database tables.  With some ingenuity, one can even leverage installation techniques to provision mobile devices, setting up and configuring a device for testing.  One can automate anything required to install and prepare an application for testing on any OS.
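For an MSI package on Windows, a silent installation is typically a matter of invoking `msiexec` with the quiet switch. A minimal sketch, assuming the build was delivered as an MSI:

```python
import subprocess

def msi_command(installer_path):
    """Assemble a silent msiexec install: /i installs the package,
    /qn suppresses all UI, /norestart avoids an automatic reboot."""
    return ["msiexec", "/i", installer_path, "/qn", "/norestart"]

def silent_install(installer_path, runner=subprocess.run):
    """Run the installer with no wizard; returns True on success.

    The runner parameter exists so the call can be faked in tests.
    """
    result = runner(msi_command(installer_path), capture_output=True)
    return result.returncode == 0
```

Other installer technologies (NSIS, InstallShield, pkg, apt) have their own silent switches; the pattern of building the command and checking the exit code stays the same.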

Tip 1:  Some build servers include features that can be leveraged to automatically install and configure an application or system under test.  If you have this available to you, leverage it!  Why invent something when it is already available to you?

Tip 2:  Set up a Windows task on a suitable schedule to automatically begin the test run.  For you very clever developers, you can create a listening event that automatically starts the test run when a newer LastSuccessfulBuild appears.

Update the License (if Required)

If the product requires licensing, make sure you synchronize your test date to the license file, if necessary, and either go through the application’s licensing wizard to update the license or drop a license file into the correct folder.

Smoke Test It

Now that your build has been installed, you can run selected tests from your test library.  I am a fan of executing two types of smoke tests.  The first selects approximately two to three key test case permutations that verify every application feature; I also include web service verification and some back-end operation tests.  The second automatically performs exploratory tests on all screens accessible to the user.  I use methods that allow me to get all windows and all window controls to help with simple tests that usually flush out embarrassing bugs, such as ensuring one can enter text into a field, select an item from a listbox, click an enabled button or perform boundary testing to ensure screen size integrity.  If you can perfect this technique, this automated exploratory test will keep working no matter how they change the application under test.  What is nice is you can have the system alert you to visual changes that have been made to the application.  In my opinion, this doesn’t replace manual exploratory efforts; it gives you more time to perform manual test efforts and can even highlight areas of concern that you can zero in on.  For mobile testing, we limited smoke testing to mobile phone simulators prior to actual testing on the device.
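The tool-agnostic core of that exploratory smoke test can be sketched as a loop over whatever control list your UI tool returns. Everything here is hypothetical scaffolding: the control dictionaries stand in for the output of a get-all-window-controls helper, and the `probe` callable stands in for the real UI interaction (typing, selecting, clicking).

```python
def exploratory_smoke(controls, probe):
    """Probe every enabled control with a simple action for its type.

    controls: iterable of dicts like {"name": "OK", "type": "button",
              "enabled": True}, as produced by a hypothetical
              get-all-window-controls helper.
    probe:    callable(control, action) performing the real UI interaction.
    Returns (control, action, error) findings instead of aborting the run.
    """
    actions = {
        "textbox": "enter_text",
        "listbox": "select_first_item",
        "button": "click",
    }
    findings = []
    for control in controls:
        action = actions.get(control["type"])
        if action is None or not control.get("enabled", True):
            continue  # skip unknown or disabled controls
        try:
            probe(control, action)
        except Exception as exc:
            # Any failure becomes a finding to report, not a crash.
            findings.append((control, action, exc))
    return findings
```

Because the loop discovers controls at run time rather than naming them, it keeps working as screens are added or rearranged, which is exactly what makes this style of smoke test resilient to application change.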

The above paragraph sounds like a dream to some of us, I know.  However, this type of smoke test coverage doesn’t happen overnight.  I am a fan of automated test coverage evolution.  This means we have a starting point that meets our immediate needs.  Over time, over releases or over sprints, we expand the automated coverage until we reach a point where we are simply adding to or modifying the TestFramework to support modified or new application features.  This automated coverage is part of the overall testing strategy, which ensures the test suites are always up to date with the latest release.  Depending on the resources available to you, this can happen within a short time period or evolve over several months or years.  If you are one of the “lone wolf testers” in a development team, or even within the company, then you have to prioritize how you will expand coverage.

I’m Ready!

Once the smoke test has concluded, the TestFramework can notify you of the results.  If you are using a commercial test tool, it usually has features that will email you the results.  If you are using an open source tool, you can use a library such as Apache Commons Email to add automatic email notification.
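Apache Commons Email is a Java library; as a rough Python equivalent using only the standard library, the notification boils down to composing a message and handing it to an SMTP server. The addresses and host below are placeholders.

```python
import smtplib
from email.message import EmailMessage

def build_results_message(summary, sender, recipient):
    """Compose a plain-text results email (addresses are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = "Smoke test results"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(summary)
    return msg

def notify_results(msg, smtp_host="localhost"):
    """Hand the finished message to an SMTP server for delivery."""
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```

Called at the end of the test run with a pass/fail summary, this closes the no-touch loop: the build downloads, installs, smoke tests and then reports back without anyone watching.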

Tip1:  See the article Using Google Charts with Selenium WebDriver for ideas on displaying the results on a large screen monitor.

Tip2:  If your TestFramework stores test results into a TestManagement system, see how you can leverage it to automatically notify you of the latest test run.

Perhaps one day I can write an article about developing your TestFramework to SMS test results straight to your mobile phone.  I am sure there is a way to do that!

The Next Steps!

If the smoke test passed to your satisfaction, the new build is ready for testing, and you can continue your test effort on the application features marked as ready for testing on your scrum or kanban board.  This means one can efficiently perform manual testing as well as upgrade the TestFramework to support changed or new application features.  You can also continue more exhaustive automated tests.  If the Start Test process has failed, you can very quickly notify your development team and allocate your time to other efforts until the next build is ready for testing.


Don’t be shy (or stubborn) about leveraging tools to help maximize the test effort.  Doing so can not only help you test but also optimize the number of resources required to do it.

A Tale of the Angry QA

After working for a company for more than three years, our QA team here in Sweden had an opportunity to meet the QA team managers in the US for the first time.  On the day they arrived they asked to speak to us one at a time and the meeting was more of an interrogation instead of a productive collaboration of teams.  Naturally I had to put a stop to that and learn why they demonstrated such anger towards us.  I was amazed to learn the origin of the attitude problem.

It seemed the managers of the US QA teams were being questioned as to why they required twenty or more people to perform the same amount of test effort my humble team of four was performing.  Their goal was to prove our test performance was ineffective and inadequate as a way to justify their staff numbers.  Were we upset?  Hell yeah, but there was no need to feed into the negativity, as our results were undeniable.

We successfully developed a process that allowed us to plan our release, modify our test environment if required, manually explore new and/or changed features, update our test automation framework to accommodate changes, add new tests and execute our existing library of more than 2000 test cases (within a four-hour test time frame) covering a variety of test types.  A testament to our effort came when a customer purchased our automated testing solution as part of the company’s product offering and wrote in the follow-up sales survey that it was the best test automation solution they had ever had the opportunity to use.  Once we learned this, our humble team of four celebrated, realizing we had discovered a way to deliver high quality work with minimal resources, which helped manage the cost of quality assurance for our cash-strapped office.

I would be so happy to hear some of the techniques you have used to support No Touch Testing.

Happy Automation!!


  1. you have carved a very nice picture that we tester normally do, starting from morning to evening and 5 day in a week.
    it was worth reading your well narrated story of software build for testing.

  2. Thank you Dwarika. I am glad you enjoyed it. With all the tools on the market it is important for us to leverage them to the max in order to maximize our test effort.