Monday, December 24, 2012

Automatic Bug Reporting

10-23-2012
Do you enjoy bug reporting?  Of course not!  Most of us do not enjoy it.  I have not met one tester who has said, “I got up this morning and couldn’t wait to come to work to do bug reporting!”  Submitting a good report is quite tedious.  A good bug report contains, but is not limited to:
  • A summary/header
  • A good description
  • Step by step recreation instructions
  • The error the steps produced
  • The actual results observed
  • Screen captures
  • Error and debugging logs
  • Setting a priority
  • Setting a critical status
  • Assigning the issue to the proper person or group
  • The area of the application/system where the issue was found
  • The type of issue found
  • Linking to other relevant issues
And the list can go on, depending on the unique requirements of the company or the application under test.
Now, once we have submitted this tediously documented report, we have to somehow manage it.  Most importantly, we have to remember to check it when it is resolved!  No matter what tools or processes a company uses, this can be a herculean effort for any one tester, although some of us are lucky enough to have good tools that make it less cumbersome.  For those of you using Notepad or Excel to track issues, well, I am sorry to hear that, but we can always invent ways to make our jobs easier with those tools as well!

When we are performing manual tests, this can be a less tedious endeavor than when we are running automated tests.  Automated tests mean we often have to analyze the test result logs to locate errors and verify whether they are potential application bugs.  For some of us this is a simple task; for others working with greater complexity, reading the test result logs for errors, re-verifying the results, and capturing the supporting information needed for a robust defect report can be a very time-consuming effort in itself.  One person I networked with mentioned that their test results came from thousands of tests running on multiple configurations of the software.  Whew, it's exhausting just thinking about it!

Even with awesome tools such as Jira to help us capture and manage reported issues, there are ways to make things even easier for ourselves.  Below are some ideas I can share that might help you in your bug tracking efforts.
  • Click Report as Bug
    • Some of the newer commercial test automation tools include features that allow you to select an error reported in the test log after a test run and click a button to report it as a bug, either in their companion test management system or in a popular defect tracking system such as Jira.  This is a very good option when utilizing automated exploratory testing, reporting errors from an automated test run, or reporting errors from a manual test run driven from the test management system.  However, for long and exhaustive tests this can still be a tedious endeavor, and depending on how robust your framework is, you may or may not have all of the information you require to report a defect.
  • Automatic Defect Reporting
    • With this option, you can design and develop a simple API within your TestFramework to help you automatically report issues discovered by your test automation efforts.  Please allow me to share the process we used.
      • We decided to leverage the defect tracking feature of a TestManagement or ALM tool to capture and manage errors as reported by the automated TestFramework.  Therefore, we invested the time to customize these tools to support our usage and to leverage the tool's artifact traceability feature.
      • We first created keyword methods, either in their own class or as part of our TestManagement or ALM service layer.  We created methods such as CreateBugReport(parms), UpdateBugReport(parms), AttachScreenShots(parms), AttachFiles(parms), and CloseBugReport(parms).  (A sketch of what such an interface might look like follows this list.)
      • Our TestFramework supported customized runs, including by configuration, customer, or test environment.  Our service layer created a test run within the TestManagement or ALM tool for the specific test run configuration that was initiated.
      • During the test run, each test case's results were evaluated (a rough sketch of this evaluation loop also follows the list):
        • When a test failed, the TestFramework would:
          • Check whether a bug report already existed.  If it did, it would collect logs, screen shots, error messages, etc., and update the existing defect
          • If a bug report did not exist, the TestFramework would create the bug with the required supporting information.  It would also include the steps that caused the error (or, if testing APIs or web services, the call with its argument list).
        • If a bug report existed and the test passed, the TestFramework would automatically close it.
        • One can also customize handling for blocked and skipped test cases.
      • At the end of the test run(s):
        • All test results were ported to the TestManagement or ALM tool
        • We could review the test results by test run configuration
        • We could filter each test run to quickly review only the non-passed tests and drill down to the actual failed step.
        • Each reported failure should include the steps for recreation, the error or warning messages, screen captures (if applicable), and error or debugging logs (if applicable).
        • We could leverage the traceability features of the tool to easily navigate to the associated defect or any other related artifacts.  We could also automatically link the defect to the test case and, upon reviewing the test case, identify the failed steps.
        • We leveraged the captured information to verify the errors found.  Each error was either an application defect or it highlighted an area of the test case or TestFramework that required updating.
      • Once we verified the issue was a valid defect, we could then port it, either manually or automatically, to our main defect tracking system.  Doing it automatically is a nice feature, as a passing test can trigger an event to close both the TestManagement tool's defect and the associated defect in the main bug tracking system.  (A simple example of porting a defect to Jira appears below as well.)
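
For readers who like a concrete picture, here is a minimal sketch (in Java) of what the keyword API described above might look like as a service-layer interface.  The interface name, the parameter lists, and the choice to track defects by a String id are my assumptions for illustration, not the exact signatures we used; only the CreateBugReport/UpdateBugReport/AttachScreenShots/AttachFiles/CloseBugReport keywords come from the process above.

    import java.util.List;
    import java.util.Optional;

    // Minimal sketch of a defect-reporting keyword API; names and parameters are illustrative.
    public interface DefectReportingService {

        // Return the id of an existing open bug report for this test case, if any.
        Optional<String> findBugReport(String testCaseId);

        // CreateBugReport(parms): open a new defect with the required supporting information.
        String createBugReport(String testCaseId, String summary,
                               List<String> recreationSteps, String errorMessage);

        // UpdateBugReport(parms): add the latest evidence to an existing defect.
        void updateBugReport(String bugId, String errorMessage, List<String> logPaths);

        // AttachScreenShots(parms) and AttachFiles(parms): attach supporting evidence.
        void attachScreenShots(String bugId, List<String> screenShotPaths);
        void attachFiles(String bugId, List<String> filePaths);

        // CloseBugReport(parms): close the defect once its test passes again.
        void closeBugReport(String bugId, String resolutionNote);
    }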
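
Building on that interface, the following is a rough sketch of the per-test evaluation described in the list: on a failure the framework updates an existing defect or creates a new one with the supporting evidence, and on a pass it closes any open defect.  The Status values, the parameter list, and the summary text are assumptions; the implementation behind DefectReportingService would be whatever your TestManagement or ALM tool exposes.

    import java.util.List;
    import java.util.Optional;

    // Illustrative per-test evaluation; the parameters represent the TestFramework's result for one test case.
    public class ResultEvaluator {

        public enum Status { PASSED, FAILED, BLOCKED, SKIPPED }

        private final DefectReportingService defects;

        public ResultEvaluator(DefectReportingService defects) {
            this.defects = defects;
        }

        public void evaluate(String testCaseId, Status status, String errorMessage,
                             List<String> steps, List<String> screenShots, List<String> logs) {
            Optional<String> existing = defects.findBugReport(testCaseId);

            if (status == Status.FAILED) {
                String bugId;
                if (existing.isPresent()) {
                    // A defect is already open: add the latest evidence to it.
                    bugId = existing.get();
                    defects.updateBugReport(bugId, errorMessage, logs);
                } else {
                    // No defect yet: create one with the recreation steps and error message.
                    bugId = defects.createBugReport(testCaseId,
                            "Automated test failure: " + testCaseId, steps, errorMessage);
                }
                defects.attachScreenShots(bugId, screenShots);
                defects.attachFiles(bugId, logs);
            } else if (status == Status.PASSED && existing.isPresent()) {
                // The test passes again: close the open defect automatically.
                defects.closeBugReport(existing.get(), "Closed automatically: test passed");
            }
            // Blocked and skipped cases can get their own customized handling here.
        }
    }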
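
And if your main defect tracking system happens to be Jira, porting a verified defect automatically can be as simple as a call to Jira's REST API (POST /rest/api/2/issue).  The base URL, project key, and credential handling below are placeholders, and a real implementation should JSON-escape the summary and description before sending them.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    // Hypothetical example of porting a verified defect into Jira via its REST API.
    public class JiraBugPorter {

        private static final String JIRA_BASE_URL = "https://jira.example.com";  // placeholder

        public static String createJiraBug(String summary, String description,
                                           String user, String apiTokenOrPassword) throws Exception {
            // Note: summary and description should be JSON-escaped in real code.
            String json = """
                    {
                      "fields": {
                        "project": { "key": "APP" },
                        "issuetype": { "name": "Bug" },
                        "summary": "%s",
                        "description": "%s"
                      }
                    }
                    """.formatted(summary, description);

            String auth = Base64.getEncoder()
                    .encodeToString((user + ":" + apiTokenOrPassword).getBytes());

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(JIRA_BASE_URL + "/rest/api/2/issue"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();  // contains the new issue key on success
        }
    }
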
Admittedly, yes, the second option takes more time to implement and may require the assistance of a developer, depending on the testing group's programming skill sets, but in the long run the benefits are:
  • Making discovered issues immediately visible to all members of the team and management.
  • Eliminating the wasted effort of tedious error hunting, validating, and reporting of issues.
  • Investing less time and energy reporting and managing bugs
Happy Testing!
