Thursday, March 5, 2015

Static testing

Static testing is the main form of the systematic software test cycle at ApexSQL

Purpose of having a systematic software test cycle
  a) To minimize the probability of false negatives (defects that slip through testing undetected)
  b) To efficiently use the limited time and resources we have for software testing
  c) To have predictable release dates, which directly helps our sales


Chronology of a systematic software test cycle
  1) (pre-testing) Internal developer cross-testing or Blitzkrieg testing of the new official product build being prepared for a release:
    a) Blitzkrieg testing must be covered within the planned development time / within the sprint
    b) Developers may fix internally found issues only within the time planned for internal testing and fixing
    c) When the sprint time runs out, all remaining issues must be sent to software testers to officially post as bugs

  2) (round 1) Developers freeze the code: further code changes in this branch are not allowed, so make sure to branch off further fixes and development (see the sketch after the list below).

Officially send the build for the first static testing round, providing software testers with as many details as possible about what needs to be tested, e.g.:
  Areas for testing:
    a) All resolved bugs, total of N
    b) GUI living standards affecting the following product UI: ...
    c) Regression test the following core features: ...
    d) New product functionalities / functionality changes: ...
    e) Planned or possible changes in the performance of: ...
    f) Updated activation / product installer
    g) Support for new OS / SQL Server / integration with: ...
    h) Coexistence with the following products: ...
    i) ...
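
For teams on Git (an assumption; the actual version control system isn't specified here), the freeze-and-branch step above might look like this minimal Python sketch, where the tag and branch names are hypothetical:

  import subprocess

  RELEASE_TAG = "round1-freeze"     # hypothetical tag marking the frozen build
  FIX_BRANCH = "post-freeze-fixes"  # hypothetical branch for further work

  # Tag the frozen code so the tested build stays reproducible...
  subprocess.run(["git", "tag", RELEASE_TAG], check=True)
  # ...and branch off so further fixes and development never touch the frozen code
  subprocess.run(["git", "checkout", "-b", FIX_BRANCH], check=True)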

  3) (round 1) Software testers acknowledge the build sent by developers by providing a top-level test plan containing:
    a) Test areas covering the whole product including full regression testing and major use cases
    b) Priorities for test areas and tester assignments when multiple software testers will participate in testing
    c) ETA in test-hours for each of the test areas

 E.g.:
  Testing can start on <date> at the earliest and will be finished by <date>. The plan is (a quick sanity-check sketch follows this example):
    1) Test all resolved bugs, tester1 and tester2, 3h total
    2) GUI living standards, tester1, 2h total
    3) Regression test for ..., tester2, 3h total
    ...
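
A plan like the example above can also be sanity-checked with a short script. The following Python sketch is illustrative only: it reuses the areas, testers, and hours from the example, and it assumes each area's hours are split evenly across its assigned testers (an assumption, not part of the process):

  from collections import defaultdict

  # Top-level test plan from the example above: (test area, assigned testers, total test-hours)
  plan = [
      ("Test all resolved bugs", ["tester1", "tester2"], 3),
      ("GUI living standards",   ["tester1"],            2),
      ("Regression test",        ["tester2"],            3),
  ]

  # Assumed even split of each area's hours across its assigned testers
  hours = defaultdict(float)
  for area, testers, total_hours in plan:
      for tester in testers:
          hours[tester] += total_hours / len(testers)

  for tester, load in sorted(hours.items()):
      print(f"{tester}: {load:.1f} test-hours")

  # The round can't finish before the busiest tester is done
  print(f"Earliest finish: {max(hours.values()):.1f} test-hours after the start")

A quick check like this makes it obvious when one tester is overloaded and the ETA can't be met within the planned dates.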

During testing, when a new release-blocking bug is found, software testers will immediately report the bug number to developers so the bug can be fixed proactively while testing is still ongoing

When testing is done, software testers send a standard test summary report with information about remaining bugs, bug flow, and their release recommendation, including clarification of why that recommendation was made

  4) (round 2) Developers fix all release-blocking bugs, plus any quick/simple bugs that aren't related to product functionality and have minimal chance of causing new regression issues, then create a new build with internally verified fixes and repeat step #2

  5) (round 2) Software testers acknowledge the new build and again provide detailed test areas as specified in step #3

Note: this time around, software testers may only plan and test the changes that developers made, i.e.:
    1) All fixed bugs since the previous test round
    2) Regression test only specific functionality affected by bug fixes and mentioned by developers
    3) Spot-test the product for a very short time, several hours at most

  6) (round 3) Simply repeat steps #4 and #5 if needed

  7) (release) If more release-blocking bugs are found, suggest which ones can be fixed immediately and release after that, with the fixes verified intra-team by developers and by software testers as needed

  8) (post-release) All bugs that remain unfixed will need to be corrected in a subsequent release: a patch, a quick-maint release, or a regular release


Q&A
Q1: What happens if software testers can't start testing a build for more than a week?
A1: Developers are then required to prioritize additional cross-testing themselves, providing a test plan as described under #3 above and committing to transparent testing:
  a) All issues found will be backlogged to software testers to retest when they can
  b) Only approved issues from the list can be fixed after testing (never fix in real-time)
  c) Once fixed, developers will again cross-test each other's fixes and release
  d) Software testers will go over all backlogged issues post-release and officially report bugs for still unfixed issues

Q2: Can we have more than 3 static test rounds?
A2: No. Following test round #3, we need to quickly fix only the most critical issue(s) and release immediately. Any remaining unfixed issues will need to be fixed in a subsequent release

Q3: But ApexSQL <insert product name> is specific and we can't test it like that; we need more time, test rounds, testers, resources, etc. Can we change the rules for this one product?
A3: No. If a product is specific then let's proactively work together on:
  a) Planning to do some internal testing during the development of the product, actively taking part in design decisions, checking all UI changes on-the-fly, etc.
  b) Creating a more thorough test plan, including shared sandboxes, use cases, and even individual test cases as needed
  c) Automating some part of the testing with help from developers, or by outsourcing a specific test application (see the sketch below)
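
For instance, part of the smoke testing could be scripted to run the product's command-line interface against known-good cases and report pass/fail. A minimal Python sketch follows; ApexSQLProduct.exe, its /verify switch, and the project files are hypothetical placeholders, not a real product CLI:

  import subprocess
  import sys

  # Hypothetical smoke cases: each runs the product CLI against a known-good project
  SMOKE_CASES = [
      ["ApexSQLProduct.exe", "/verify", "/project:smoke_case_1.axpj"],
      ["ApexSQLProduct.exe", "/verify", "/project:smoke_case_2.axpj"],
  ]

  failures = 0
  for cmd in SMOKE_CASES:
      try:
          # A clean exit code within the time budget counts as a pass
          result = subprocess.run(cmd, timeout=120)
          if result.returncode != 0:
              print(f"FAILED ({result.returncode}): {' '.join(cmd)}")
              failures += 1
      except subprocess.TimeoutExpired:
          print(f"TIMEOUT: {' '.join(cmd)}")
          failures += 1

  # A non-zero exit tells the build server (or a tester) that smoke testing failed
  sys.exit(1 if failures else 0)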

Blitzkrieg testing

(Wikipedia) Blitzkrieg (German, "lightning war") is an anglicized term describing a method of warfare whereby an attacking force spearheaded by a dense concentration of armored and motorized or mechanized infantry formations, and heavily backed up by close air support, forces a breakthrough into the enemy's line of defense through a series of short, fast, powerful attacks

Although we're striving to automate as much of our development process as we can, for some complex product releases we still need to ensure products pass human/manual testing so there are no broken-functionality issues we missed

What happens when we have a limited number of testers covering multiple products from multiple development teams? Since we never test in parallel, one or more products inevitably get delayed, we pile up technical waste, and we lose agility

Consider a typical development scenario:
  1) On day 1 Dev team A works for 3 weeks and builds Product A for testing
  2) On day 2 Dev team B works for 3 weeks and builds Product B for testing
  3) The same software testers cover both Dev team A and Dev team B
  4) Software testers receive Product A and Product B for testing on week 4
  5) We prioritize products based on their ROI, date of last release, and customer requests; one product still gets delayed, and there is always at least one customer that we've let down (a toy sketch of this prioritization follows the list)
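
To make that trade-off concrete, here is a toy Python sketch of such a prioritization; the weights and product figures are invented for illustration and are not an actual ApexSQL formula:

  # Toy prioritization: higher score = test first; all weights and figures are invented
  products = [
      # (name, ROI in $K, days since last release, open customer requests)
      ("Product A", 40, 90, 5),
      ("Product B", 25, 150, 12),
  ]

  def priority(roi, days_since_release, customer_requests):
      # Illustrative weighting of revenue impact, release staleness, and customer pressure
      return 1.0 * roi + 0.2 * days_since_release + 2.0 * customer_requests

  for name, roi, days, requests in sorted(products, key=lambda p: priority(*p[1:]), reverse=True):
      print(f"{name}: score {priority(roi, days, requests):.1f}")

Whichever product scores lower still waits for testers, and that delay is exactly what the Blitzkrieg approach below removes.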

Instead of losing momentum, we can now add air support (developers) to help mechanized infantry (software testers) and blitzkrieg the product for one day

Years ago, in the early days of ApexSQL, each developer owned exactly one product and was also responsible for self-testing it before sending it to testers. Once this self-testing was no longer mandatory, we regressed to doing no self-testing at all prior to sending products to testing

Consider a different step #5 above:
  5) Dev team A works together with software testers to blitzkrieg-test Product A for one day and release it as an interim build, or even a production build if software testers agree
  6) Dev team B works together with software testers the next day to blitzkrieg-test Product B for one day and release it as an interim/production build

Critical note: developers cannot report bugs "internally" or fix them on the fly: all bugs must be reported to software testers, who will then create bugs in the system and prioritize them accordingly

Also, to avoid confusion: some online blogs equate Blitzkrieg testing with spot-testing, but there is a major difference: spot-testing is quick testing by a lone scout, while Blitzkrieg testing is a short but major quality offensive