Thursday, March 5, 2015

Static testing

Static testing is the main form of the systematic software test cycle at ApexSQL

Purpose of having a systematic software test cycle
  a) To minimize the probability of false negatives
  b) To efficiently leverage limited time and resources we have when doing software testing
  c) To have predictable release dates which directly help our sales


Chronology of a systematic software test cycle
  1) (pre-testing) Internal developer cross-testing or Blitzkrieg testing of the new official product build being prepared for a release:
    a) Blitzkrieg testing must be covered within the planned development time / within the sprint
    b) Developers can only fix internally found issues based on available time planned for internal testing and fixing
    c) When the sprint time runs out, all remaining issues found must be sent to software testers to officially post as bugs

  2) (round 1) Developers freeze the code: further code changes in this branch are not allowed, so make sure to branch off further fixes and development.

Officially send the build to the first static testing round, providing software testers with as many details as possible about what needs to be tested, e.g.:
  Areas for testing:
    a) All resolved bugs, total of N
    b) GUI living standards affecting the following product UI: ...
    c) Regression test the following core features: ...
    d) New product functionalities / functionality changes: ...
    e) Planned or possible changes in the performance of: ...
    f) Updated activation / product installer
    g) Support for new OS / SQL Server / integration with: ...
    h) Coexistence with the following products: ...
    i) ...

  3) (round 1) Software testers acknowledge the build sent by developers by providing a top-level test plan containing:
    a) Test areas covering the whole product including full regression testing and major use cases
    b) Priorities for test areas and tester assignments when multiple software testers will participate in testing
    c) ETA in test-hours for each of the test areas

 E.g.:
  Testing can start on <date> at the earliest and must finish by <date>. The plan is:
    1) Test all resolved bugs, tester1 and tester2, 3h total
    2) GUI living standards, tester1, 2h total
    3) Regression test for ..., tester2, 3h total
    ...
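A top-level plan like the one above can be kept as simple structured data, which makes it easy to sanity-check the total ETA and each tester's load before committing to dates. A minimal sketch (the class, field names, and numbers are illustrative, not an actual ApexSQL tool):

```python
from dataclasses import dataclass


@dataclass
class TestArea:
    name: str           # e.g. "Test all resolved bugs"
    testers: list[str]  # assigned software testers
    eta_hours: float    # estimated effort in test-hours

# Illustrative plan mirroring the example above
plan = [
    TestArea("Test all resolved bugs", ["tester1", "tester2"], 3),
    TestArea("GUI living standards", ["tester1"], 2),
    TestArea("Regression test core features", ["tester2"], 3),
]

def total_eta(plan):
    """Sum the ETA across all test areas."""
    return sum(area.eta_hours for area in plan)

def hours_per_tester(plan):
    """Split each area's ETA evenly among its assigned testers."""
    load = {}
    for area in plan:
        share = area.eta_hours / len(area.testers)
        for tester in area.testers:
            load[tester] = load.get(tester, 0) + share
    return load

print(total_eta(plan))         # 8 test-hours total
print(hours_per_tester(plan))  # {'tester1': 3.5, 'tester2': 4.5}
```

Summing per-tester load like this makes it obvious when one tester is overbooked for the planned window, before testing starts.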

During testing, when a new release-blocking bug is found, software testers will immediately report the bug number to developers so the bug can be fixed proactively while testing is still ongoing

When testing is done, a standard test summary report is sent with information about remaining bugs, bug flow, and the software testers' release recommendation, including clarification of why that recommendation was made

  4) (round 2) Developers fix all release-blocking bugs, plus additional quick/simple bugs that aren't related to product functionality and have minimal chance of causing new regression issues, then create a new build with internally verified fixes and repeat step #2

  5) (round 2) Software testers acknowledge the new build and again provide detailed test areas as specified in step #3

Note: this time around software testers can only plan and test changes that developers made, i.e.:
    1) All fixed bugs since the previous test round
    2) Regression test only specific functionality affected by bug fixes and mentioned by developers
    3) Spot-test the product for a very short time, several hours at most

  6) (round 3) Simply repeat steps #4 and #5 if there is a need to do so

  7) (release) If more release-blocking bugs are found, suggest which ones can be fixed immediately and release after that, with fixes verified intra-team by developers and by software testers as needed

  8) (post-release) All bugs that remain unfixed will need to be corrected in subsequent releases: a patch, a quick-maint release, or a regular release
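The triage rule used in steps #4 and #7 — always fix release blockers, and otherwise fix only quick/simple bugs that don't touch product functionality and carry minimal regression risk — can be sketched as a simple predicate. All field names here are assumptions for illustration, not an actual bug-tracker schema:

```python
from dataclasses import dataclass


@dataclass
class Bug:
    bug_id: int
    release_blocking: bool
    quick_fix: bool             # quick/simple to fix
    affects_functionality: bool
    regression_risk: str        # "minimal", "moderate", or "high"

def fix_before_release(bug: Bug) -> bool:
    """Step #4 rule: always fix release blockers; otherwise fix only
    quick, non-functional bugs with minimal chance of new regressions."""
    if bug.release_blocking:
        return True
    return (bug.quick_fix
            and not bug.affects_functionality
            and bug.regression_risk == "minimal")

bugs = [
    Bug(101, True,  False, True,  "high"),     # blocker: fix now
    Bug(102, False, True,  False, "minimal"),  # quick cosmetic fix: fix now
    Bug(103, False, True,  True,  "minimal"),  # touches functionality: defer
]
print([b.bug_id for b in bugs if fix_before_release(b)])  # [101, 102]
```

Anything the predicate rejects is exactly what step #8 describes: it stays open and is corrected in a subsequent release.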


Q&A
Q1: What happens if software testers can't start testing a build for more than a week?
A1: Developers are then required to prioritize additional cross-testing themselves, providing a test plan as described in step #3 above and committing to transparent testing:
  a) All issues found will be backlogged to software testers to retest when they can
  b) Only approved issues from the list can be fixed after testing (never fix in real-time)
  c) Once fixed, developers will again cross-test each other's fixes and release
  d) Software testers will go over all backlogged issues post-release and officially report bugs for still unfixed issues

Q2: Can we have more than 3 static test rounds?
A2: No. We need to quickly fix only the most critical issue(s) following test round #3 and release immediately. Any remaining unfixed issues will need to be fixed in the subsequent release

Q3: But ApexSQL <insert product name> is specific and we can't test like that; we need more time, test rounds, testers, resources, etc. Can we change the rules for this one product?
A3: No. If a product is specific then let's proactively work together on:
  a) Planning to do some internal testing during the development of the product, actively taking part in design decisions, checking all UI changes on-the-fly, etc.
  b) Creating a more thorough test plan, including shared sandboxes, use cases, and even individual test cases as needed
  c) Automating some part of the testing with help from developers, or by outsourcing a specific test application
