Monday, December 24, 2012

Test Automation Reporting

11-10-2012

This weekend, a user on LinkedIn posed the following question:

What should be included in a Test Automation Report?

Hence this new article! I thought this was a very good question to ask. Today, many companies are seeking developers to help implement their test automation solutions; however, there is a difference between the type of reports developers like and the type of reports testers need. Developers LOVE logger style reports. Logger style reports basically trace the activity of a program and will print messages related to information, failures, warnings, and debugging. Here is a snapshot of the type of logging they generally find useful.

  • Sat Nov 10 16:40:41 CET 2012:INFO:Started mockService [SampleServiceSoapBinding MockService] on port [8088] at path [/mockSampleServiceSoapBinding]
  • Sat Nov 10 16:40:52 CET 2012:DEBUG:Attempt 1 to execute request
  • Sat Nov 10 16:40:52 CET 2012:DEBUG:Sending request: POST /mockSampleServiceSoapBinding HTTP/1.1
  • Sat Nov 10 16:40:54 CET 2012:ERROR:An error occured [Some connections settings are missing], see error log for details
  • Sat Nov 10 16:40:55 CET 2012:DEBUG:Connection can be kept alive for 10000 MILLISECONDS

A logger style report makes sense to a developer, as it will generally point them to where a problem exists in the code. This definitely makes sense for unit testing or code debugging. However, for your black box testers (you know, the people who do not see inside the code), this type of test logging is usually ineffective for their purposes. Logging style reports may point to where the code has failed in the TestFramework, but they don't really give the tester a clear picture of what the exact problem is.

So what are the tester's requirements? They might read:
  1. As a tester, I need the test automation report to provide comprehensive information that will allow me to efficiently and effectively analyze why the expected result was not met.
  2. As a tester, I would like the TestFramework to provide supporting information, such as screen captures, logs, and actual results, when a test failure, block, or skip has occurred.
So what do we mean by comprehensive information? Comprehensive information can include, but is not limited to:
  1. Test steps
  2. Screen shots
  3. The AUT's logs
  4. OS
  5. Machine name
  6. The AUT's version and build
  7. Configuration information
  8. Memory information
  9. CPU information
OK, before you start complaining, let me explain why. There are actually two very simple reasons why this information is important:
  1. Most developers will require this information in order to reproduce the problem and resolve the reported issue.
  2. Some reported defects have specific reasons why they occur. Having the above information helps reveal the cause more efficiently than having the tester guess how it happened.
If you are part of an agile team, it will most certainly reduce the effort and time wasted analyzing the test automation reports. Don't you agree?
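Much of this environment information can be gathered automatically by the TestFramework at the start of a run. Here is a minimal sketch in Python; psutil is a third-party package, and the function and field names are my own invention for illustration:

import platform
import socket

def gather_environment_info(aut_version="unknown"):
    """Collect the machine details a developer needs to reproduce an issue."""
    info = {
        "machine_name": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "architecture": platform.machine(),
        "aut_version": aut_version,  # version/build of the application under test
    }
    try:
        import psutil  # third-party; supplies the memory and CPU details
        info["memory_total_mb"] = psutil.virtual_memory().total // (1024 * 1024)
        info["cpu_count"] = psutil.cpu_count()
    except ImportError:
        pass  # fall back to the basics when psutil is not installed
    return info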

Test automation reports should reveal one of three types of problems: a) the TestFramework requires a fix or an upgrade; b) the test case or test data requires updating; or c) there is an actual error in the application under test. Therefore, we need two categories of reports: 1) reports the test automation developer can use to debug and resolve the TestFramework code; and 2) reports that help the tester (or other stakeholders) more accurately analyze non-passed tests. So let's take a look at some reports, their purpose, and the content they should contain.

TestFramework Logger Report

Purpose
This report is intended for the Test Automation Developer to assist in identifying where the test automation code should be fixed, refactored or upgraded.

Content
The report can be a typical developer-style logging report and can contain information such as: test start time, test end time, info, warnings, errors, debugging, machine name, and the OS.
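As a sketch, Python's standard logging module produces exactly this style of output out of the box; the file name and format string below are assumptions, not a prescribed layout:

import logging
import platform
import socket

# Timestamped, level-tagged output similar to the excerpt shown earlier.
logging.basicConfig(
    filename="testframework.log",
    level=logging.DEBUG,
    format="%(asctime)s:%(levelname)s:%(message)s",
)
log = logging.getLogger("TestFramework")

log.info("Test run started on %s (%s)", socket.gethostname(), platform.platform())
log.debug("Attempt 1 to execute request")
log.warning("Connection settings incomplete; using defaults")
log.error("An error occurred, see error log for details")
log.info("Test run ended")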

Test Run Results Report

I was a long-time user of SilkTest, and I was spoiled by letting SilkTest do most of the work required to provide me a decent test results report. Once I switched to an open source tool, I realized I had to invest more effort into getting a report that would prove usable. Here is a snapshot of one of my earlier attempts.

TESTCASE:  App.7 - Confirm a new insurance was created for the customer
1.  Select Person & Avtal
[Action] pickMenu [Object Name] mainMenu [Data] Person & Avtal
[Verify Action] none [Verify Data] none
 
2.  Select Engagemang
[Action] pickMenu [Object Name] actions [Data] Engagemang
[Verify Action] none [Verify Data] none
 
3.  Enter customer social security id
[Action] writeText [Object Name] sokId [Data] 123-45-6789
[Verify Action] none [Verify Data] none
 
4.  Pick the social security id.
[Action] clickTree [Object Name] none [Data] 123-45-6789 Treekundinteiapp theName
[Verify Action] none [Verify Data] none
***FAILED:  Failed clicking tree node item: 123-45-6789 Treekundinteiapp theName.
5.  Pick the insurance
[Action] clickTree [Object Name] none [Data] itemTextLink1
[Verify Action] confirmTextPresent [Verify Data] Avtalspension PA-03
***FAILED:  Failed clicking tree node item: itemTextLink1.
***VERIFICATION FAILED:  'Avtalspension PA-03' not found on page.
JIRA ID:  NONE
RESULTS:  FAILED

As you can see from the above, it gives me an easy way to verify the results manually: it shows each step that was executed and asserted/verified, the data that was used, and specifically which steps failed and why. On each failed test step, the TestFramework would take a snapshot of the window (if testing windowed or web based applications) and save it with the corresponding test case id number. The Jira ID would tell me whether an issue pre-existed or not. The result, of course, would be reported as failed if any test step had not passed. If the test passed and it had a Jira ID, then I would know to close the associated Jira issue.
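A report like this does not require anything fancy from the TestFramework. Below is a minimal sketch of how the steps might be recorded and printed; the class and field names are my own invention for illustration:

class TestStep:
    """One executed step: what was done, with what data, and how it was verified."""
    def __init__(self, number, description, action, obj_name, data,
                 verify_action="none", verify_data="none"):
        self.number = number
        self.description = description
        self.action = action
        self.obj_name = obj_name
        self.data = data
        self.verify_action = verify_action
        self.verify_data = verify_data
        self.failures = []  # filled in by the runner when the step fails

    def report(self):
        lines = [
            f"{self.number}.  {self.description}",
            f"[Action] {self.action} [Object Name] {self.obj_name} [Data] {self.data}",
            f"[Verify Action] {self.verify_action} [Verify Data] {self.verify_data}",
        ]
        lines += [f"***FAILED:  {reason}" for reason in self.failures]
        return "\n".join(lines)

def report_test_case(case_id, steps, jira_id="NONE"):
    print(f"TESTCASE:  {case_id}")
    for step in steps:
        print(step.report())
    print(f"JIRA ID:  {jira_id}")
    print(f"RESULTS:  {'FAILED' if any(s.failures for s in steps) else 'PASSED'}")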

Purpose
The test run results report is intended to aid the tester in analyzing the results of the test run. At a minimum, a tester will typically review the test run length, pass/fail results, machine, OS, test type, build, customer, sprint, release, and the status of each test executed.

Content
This should be a comprehensive report providing the tester with all the required details to effectively analyze why a test case did not pass.  This report should contain:
  • Global information, e.g.
    • Test run id
    • Test run machine
    • Test run OS
    • Test run configuration
    • Test run start time
    • Test run end time
    • Customer name
    • Release Id
    • Sprint Id
  • Test case id
  • Test case description
  • Test steps
    • Test step id
    • Test step description
    • Test step action
    • Test step action data
    • Test step expected results
    • Test step verification
    • Test step verification data
    • Test step start time
    • Test step end time
    • Test step status
    • Test step actual results
    • Test step screen shot
  • AUT test log
This may seem tedious, but there is a tremendous benefit to the tester. For one, they can more efficiently reproduce the reported problems, and agile teams should love this because it saves a great deal of time. It will also highlight areas in the application that may require additional testing, which can lead to adding new test cases or modifying existing ones. If the effort proves there is an issue to report, then the tester will likely have all of the information required to make an effective defect report (see the article on Automatic Bug Reporting).
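One way to keep all of this manageable is to model the report as plain data and render it to text or HTML afterwards. The sketch below mirrors the structure of the list above; the names are assumptions, not a standard:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StepResult:
    step_id: str
    description: str
    action: str
    action_data: str
    expected: str
    verification: str
    verification_data: str
    start_time: str
    end_time: str
    status: str  # e.g. PASSED / FAILED / BLOCKED / SKIPPED
    actual: str = ""
    screenshot_path: Optional[str] = None  # captured on failure

@dataclass
class TestCaseResult:
    case_id: str
    description: str
    steps: List[StepResult] = field(default_factory=list)

@dataclass
class TestRunReport:
    run_id: str
    machine: str
    os: str
    configuration: str
    start_time: str
    end_time: str
    customer: str
    release_id: str
    sprint_id: str
    test_cases: List[TestCaseResult] = field(default_factory=list)
    aut_log_path: Optional[str] = None  # attach the AUT's own log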

Test Run Alert Report

Purpose
This is an interesting report type that is intended to alert the responsible parties of a possible failure in the network, the test machine, application startup, or the application's basic functionality. This report can be used by, for example, product owners, testers, the test environment manager, or anyone who should be alerted that a failure is blocking the test effort.

Content
Examples of what content to include are:
  • Date and time
  • Test environment name
  • Machine name and OS
  • Error
  • Responsible person
This makes a great email notification report, or one you can display on a large display screen. See the article on Using Google Charts with Selenium for additional ideas.
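As an email notification, this is only a few lines of code. Here is a minimal sketch using Python's smtplib; the server address and email addresses are placeholders:

import smtplib
from datetime import datetime
from email.message import EmailMessage

def send_alert(error, environment, machine, os_name, responsible,
               smtp_host="smtp.example.com", to_addr="team@example.com"):
    """Alert the responsible parties that a failure is blocking the test effort."""
    msg = EmailMessage()
    msg["Subject"] = f"TEST RUN ALERT: {environment} on {machine}"
    msg["From"] = "testframework@example.com"
    msg["To"] = to_addr
    msg.set_content(
        f"Date and time:     {datetime.now():%Y-%m-%d %H:%M:%S}\n"
        f"Test environment:  {environment}\n"
        f"Machine / OS:      {machine} / {os_name}\n"
        f"Error:             {error}\n"
        f"Responsible:       {responsible}\n"
    )
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)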

Test Environment Health Report

Purpose
This is a great report, especially if you have more than one test environment; it can be emailed, displayed on a large monitor, or published to your shared document repository. It is basically a chart showing the overall health of your test environments after each test run.

Content
Here the content is based on the information you desire to capture. When I used this type of report, it was very simple: a bar chart displaying the overall pass/fail results of each test environment. The goal was for every test environment to be all green.
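A chart like that can be produced with any charting tool; here is a sketch using matplotlib, a third-party Python plotting library, with made-up environment names and numbers:

import matplotlib.pyplot as plt

# Pass/fail counts per test environment from the latest run (sample data).
environments = ["QA-1", "QA-2", "Staging"]
passed = [120, 98, 130]
failed = [3, 22, 0]

fig, ax = plt.subplots()
ax.bar(environments, passed, color="green", label="Passed")
ax.bar(environments, failed, bottom=passed, color="red", label="Failed")
ax.set_ylabel("Test cases")
ax.set_title("Test Environment Health")
ax.legend()
fig.savefig("environment_health.png")  # email it or push it to the shared display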

Lava Lamp Indicators

Purpose
This is a visual indicator of the current test run's health. It is a nice visual aid for all project/product stakeholders, but I think it is more of a nice toy for the test automation specialist :) and perhaps the tester. Depending on how the reporting mechanism works, one can begin analyzing failures before the end of a test run if the red lava lamp turns on.

Content
Red and green lava lamps are connected to the test run machine; the red one indicates the current test run is failing and the green one indicates it is passing.
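How the lamps are actually switched depends entirely on your hardware. The sketch below assumes a network-controlled power switch with a simple HTTP API; the URL scheme is entirely hypothetical:

import urllib.request

# Hypothetical HTTP-controlled power strip, one outlet per lava lamp.
SWITCH_URL = "http://power-switch.local/outlet/{lamp}/{state}"

def update_lamps(run_is_passing):
    """Turn on the green lamp while the run is passing, the red one otherwise."""
    red = "off" if run_is_passing else "on"
    green = "on" if run_is_passing else "off"
    urllib.request.urlopen(SWITCH_URL.format(lamp="red", state=red))
    urllib.request.urlopen(SWITCH_URL.format(lamp="green", state=green))

# Call after every test case so failures show up before the run ends.
update_lamps(run_is_passing=False)  # a failure occurred: red lamp on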

Other Reports

Other reports? Hmmm. Well, I guess I will leave that up to your imagination and unique needs. As the old saying goes, "one size doesn't fit all," and that is true of our testing and test automation strategy. Every product and every company will impose unique demands on the process we adopt. Hopefully, though, the above will give you some ideas about the types of reports you want your TestFramework to produce.

Conclusion

So there you have it, ladies and gentlemen: some ideas for the types of reports you can generate to support your testing staff. Feel free to comment and share ideas you may have for report types and content.

Happy Automation!
