Monday, November 30, 2015

Release process and test phases

The release process covers three distinct, consecutive test phases that product teams (devs and SSEs) are in charge of:
  1) Design and usability testing
  2) Continuous testing
  3) Static/release testing

Each team must have a clear test plan that allows for systematic advancement through the test phases, culminating in a product release

In a good product release scenario a team should never regress from one test phase to a previous one - guidelines explaining each test phase follow
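
To make the intended flow concrete, here is a minimal sketch of the phases as a forward-only progression, with regression as an explicit (and undesirable) exception - the names and structure are illustrative only, not part of any real tool:

  from enum import Enum

  class TestPhase(Enum):
      DESIGN_AND_USABILITY = 1
      CONTINUOUS_TESTING = 2
      STATIC_RELEASE_TESTING = 3

  def advance(phase: TestPhase) -> TestPhase:
      # Normal flow: strictly forward; release follows the last phase
      return TestPhase(min(phase.value + 1, TestPhase.STATIC_RELEASE_TESTING.value))

  def regress(phase: TestPhase, target: TestPhase) -> TestPhase:
      # Allowed, but a signal that an earlier phase was not done well
      assert target.value < phase.value, "can only regress to an earlier phase"
      print(f"Process regression: {phase.name} -> {target.name}; update the test plan")
      return target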


1. Design and usability testing
During this phase the team tests new features and product changes, and works towards locking down the product interaction design

Usability testing helps make sure the interaction design is optimal and customer-friendly - the team should leverage feedback from as many external sources as possible: company stakeholders, power-users, other product teams, etc.

It is OK to update the interaction design and make functional UI changes during this phase, but not after the design is locked down


2. Continuous testing
Once the interaction design is locked down, the team must focus on finding as many bugs as possible and fixing them all as soon as they are found and reported

Usually this phase helps clean up functional bugs and regression bugs, and makes sure the build is stable enough to go on to the next and final phase of the release process (for more details about continuous vs. static testing read this article)

By this time there should be no usability or interaction design bugs reported; if such a bug is found, however, the team must stop continuous testing and immediately regress to phase #1, Design and usability testing


3. Static/release testing
At this point the team cannot make multiple product builds: the code must be fully locked down, and only one build can be tested

If the previous two testing phases were covered well, static testing will reveal only a limited number of functional bugs

For static testing round #2, the team should fix only the most critical bugs:
  a) Regression/broken bugs
  b) Bugs that would clearly make the new build worse than the public one

If done correctly there should be no static testing round #3, but if round #3 does happen there can be no round #4, so a product release is mandatory after it

If design bugs or multiple "critical" bugs are found during static testing, the team must stop static testing and immediately regress to the corresponding previous test phase


Whenever a process regression happens, it means the previous testing phases weren't done well; the team must update/improve its test plan to prevent process regressions from happening in the future

Remember that each team gives its own product release recommendation at the end - a good litmus-test question for a release is: "Is this product build better than the previous one?"

The one thing a team must follow at all times is this clear release process, and the team must be able to provide feedback to all stakeholders about:
  a) What is the current product testing phase
  b) What is the ETA to go into the next phase / release

Thursday, October 15, 2015

Scrum PPPPPPP

When planning work ahead in your sprints, avoid guesstimating at all costs

Why? By guessing estimates for weeks of work ahead you're backing yourself and your team into a corner: when unplanned things show up (and they always will) there is no turning back, as you've already committed to delivering per your sprint

Optimists guesstimate less work than there actually is, which leads to overtime, low-quality deliverables, a messed-up roadmap schedule, and a feeling of failure at the end

Pessimists guesstimate unrealistic and unjustified amounts of work, which is even worse than being an optimist - it raises a red flag that the team isn't capable of doing good work in realistic time

Unfortunately, even when a team is capable of finishing work in good time, pessimistic estimates made at the start become a self-fulfilling prophecy and the work expands to fill all of the allocated time

Realists, on the other hand, differ from optimists and pessimists simply by being good JIT planners and proactive communicators

How to recognize an optimist or a pessimist?
  a) The sprint contains many part1..part_n type tasks
  b) Multiple different tasks have identical estimates of n hours (usually equal to the max daily deliverable limit)
  c) There are no internal testing or peer-review tasks, only work-specific tasks
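
As a rough illustration, these checks could even be mechanized; the sketch below assumes a hypothetical list of (task name, estimated hours) pairs and made-up thresholds:

  import re
  from collections import Counter

  def guesstimate_red_flags(tasks, max_daily_hours=8):
      # tasks: list of (task name, estimated hours) pairs
      flags = []
      names = [name for name, _ in tasks]
      # a) many part1..part_n style tasks
      if sum(bool(re.search(r"part[ _]?\d+", n, re.I)) for n in names) >= 3:
          flags.append("many part1..part_n tasks")
      # b) several tasks pinned at the max daily deliverable limit
      estimates = Counter(hours for _, hours in tasks)
      if estimates[max_daily_hours] >= 3:
          flags.append("identical estimates at the daily limit")
      # c) no internal testing or peer-review tasks at all
      if not any("test" in n.lower() or "review" in n.lower() for n in names):
          flags.append("no testing or peer-review tasks")
      return flags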

How to become a realist?
  1) As a ScrumMaster, dedicate a few hours to meet with your team when planning a new sprint, and take input from all team members when estimating task hours.
  One-man 15-minute sprint planning is a recipe for failure

  2) When encountering goals that require more detailed research before the work can be broken down into clear daily deliverables, plan a spike goal of a few hours that will result in more detailed planning of the work ahead.
  This is JIT planning through spike goals, and it helps avoid doing complex planning for weeks ahead

  3) Unknowns show up even for the best planners - when this happens, figure out what happened, devise the best plan for solving the new situation, and if the original estimates are no longer viable, immediately inform all stakeholders

Thursday, March 5, 2015

Static testing

Static testing is the main form of the systematic software test cycle at ApexSQL

Purpose of having a systematic software test cycle
  a) To minimize the probability of false negatives
  b) To efficiently leverage limited time and resources we have when doing software testing
  c) To have predictable release dates which directly help our sales


Chronology of a systematic software test cycle
  1) (pre-testing) Internal developer cross-testing or Blitzkrieg testing of the new official product build being prepared for a release:
    a) Blitzkrieg testing must be covered within the planned development time / within the sprint
    b) Developers may fix internally found issues only within the time planned for internal testing and fixing
    c) When the sprint time runs out, all remaining found issues must be sent to software testers to be officially posted as bugs

  2) (round 1) Developers freeze the code: further code changes in this branch are not allowed, so make sure to branch off further fixes and development.

Officially send the build to the first static testing round, providing software testers with as many details as possible about what needs to be tested, e.g.:
  Areas for testing:
    a) All resolved bugs, total of N
    b) GUI living standards affecting the following product UI: ...
    c) Regression test the following core features: ...
    d) New product functionalities / functionality changes: ...
    e) Planned or a possible change in performance of: ...
    f) Updated activation / product installer
    g) Support for new OS / SQL Server / integration with: ...
    h) Coexistence with the following products: ...
    i) ...

  3) (round 1) Software testers acknowledge the build sent by developers by providing a top-level test plan containing:
    a) Test areas covering the whole product including full regression testing and major use cases
    b) Priorities for test areas and tester assignments when multiple software testers will participate in testing
    c) ETA in test-hours for each of the test areas

 E.g.:
  Testing can start on <date> at the earliest and finish by <date>. The plan is:
    1) Test all resolved bugs, tester1 and tester2, 3h total
    2) GUI living standards, tester1, 2h total
    3) Regression test for ..., tester2, 3h total
    ...
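
To make the scheduling arithmetic concrete, here is a small sketch that totals such a plan; the data mirrors the example above and is purely illustrative, and the even split of hours among assigned testers is an assumption:

  from collections import defaultdict

  # (test area, assigned testers, total hours) as in the example plan
  plan = [
      ("Test all resolved bugs", ["tester1", "tester2"], 3),
      ("GUI living standards",   ["tester1"],            2),
      ("Regression test",        ["tester2"],            3),
  ]

  load = defaultdict(float)
  for area, testers, hours in plan:
      for t in testers:
          load[t] += hours / len(testers)  # even split is an assumption

  total = sum(hours for _, _, hours in plan)
  print(f"Total ETA: {total} test-hours; per-tester load: {dict(load)}")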

During testing, when a new release-blocking bug is found, software testers immediately report the bug number to developers so the bug can be fixed proactively while testing is still ongoing

When the testing is done, a standard test summary report is sent with information about remaining bugs, bug flow, and the software testers' release recommendation, including the reasoning behind that recommendation

  4) (round 2) Developers fix all release-blocking bugs, plus additional quick/simple bugs that aren't related to product functionality and have minimal chance of causing new regression issues, then create a new build with internally verified fixes and repeat step #2

  5) (round 2) Software testers acknowledge the new build and again provide detailed test areas as specified in step #3

Note: this time around software testers can only plan and test changes that developers made, i.e.:
    1) All fixed bugs since the previous test round
    2) Regression test only specific functionality affected by bug fixes and mentioned by developers
    3) Spot-test the product for a very short time, several hours at most

  6) (round 3) Simply repeat steps #4 and #5 if there is a need to do so

  7) (release) If more release-blocking bugs are found, suggest which ones can be fixed immediately and release after that, with fixes verified intra-team by developers, and by software testers as needed

  8) (post-release) All bugs that remain unfixed will need to be corrected in a subsequent release: a patch, a quick-maint release, or a regular release


Q&A
Q1: What happens if software testers can't start testing a build for more than a week?
A1: Developers are then required to prioritize additional cross-testing themselves by providing a test plan as described under #3 above and committing to transparent testing:
  a) All issues found will be backlogged to software testers to retest when they can
  b) Only approved issues from the list can be fixed after testing (never fix in real-time)
  c) Once fixed, developers will again cross-test each other's fixes and release
  d) Software testers will go over all backlogged issues post-release and officially report bugs for still unfixed issues

Q2: Can we have more than 3 static test rounds?
A2: No. We need to quickly fix only the most critical issue(s) following test round #3 and release immediately. Any remaining unfixed issues will need to be fixed in the subsequent release

Q3: But ApexSQL <insert product name> is specific and we can't test like that, we need more time, test rounds, testers, resources, etc., can we change the rules for this one product?
A3: No. If a product is specific then let's proactively work together on:
  a) Planning to do some internal testing during the development of the product, actively taking part in design decisions, checking all UI changes on-the-fly, etc.
  b) Creating a more thorough test plan, including shared sandboxes, use cases, and even individual test cases as needed
  c) Automating some part of the testing with help from developers, or by outsourcing a specific test application

Blitzkrieg testing

(Wikipedia) Blitzkrieg (German, "lightning war") is an anglicized term describing a method of warfare whereby an attacking force spearheaded by a dense concentration of armored and motorized or mechanized infantry formations, and heavily backed up by close air support, forces a breakthrough into the enemy's line of defense through a series of short, fast, powerful attacks

Although we're striving to automate as many development processes as we can, in some cases of complex product releases we still need to ensure products pass human/manual testing so there are no broken-functionality issues we missed

What happens when we have a limited number of testers covering multiple products from multiple development teams? As we never test in parallel, one or more products inevitably get delayed, we pile up technical waste, and we lose agility

Consider a typical development scenario:
  1) On day 1 Dev team A works for 3 weeks and builds Product A for testing
  2) On day 2 Dev team B works for 3 weeks and builds Product B for testing
  3) The same software testers cover both Dev team A and Dev team B
  4) Software testers receive Product A and Product B for testing on week 4
  5) We prioritize products based on their ROI, date of last release, and customer requests - one product still gets delayed, and there is always at least one customer that we've let down
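
Prioritization in step 5 is ultimately a judgment call; purely as an illustration, one could combine those inputs into a score like this (the weights and numbers below are invented, not an ApexSQL formula):

  from datetime import date

  def release_priority(roi, last_release, customer_requests, today):
      # Higher ROI, a staler last release, and more customer requests
      # all raise the priority; the weights are arbitrary
      days_stale = (today - last_release).days
      return 2.0 * roi + 0.01 * days_stale + 0.5 * customer_requests

  # The product with the higher score gets tested first
  today = date(2015, 3, 5)
  product_a = release_priority(1.2, date(2014, 9, 1), 4, today)
  product_b = release_priority(0.8, date(2014, 12, 1), 1, today)
  print("test Product A first" if product_a > product_b else "test Product B first")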

Instead of losing momentum, we can now add air support (developers) to help mechanized infantry (software testers) and blitzkrieg the product for one day

Years ago, in the early days of ApexSQL, each developer owned exactly one product and was also responsible for self-testing the product before sending it to testers. Once that self-testing was no longer mandatory, we regressed to not doing any self-testing prior to sending products to testing

Now consider a different step #5 in the scenario above:
  5) Dev team A works together with software testers to blitzkrieg-test Product A for one day and release it as Interim build or even production build if software testers agree
  6) Dev team B works together with software testers the next day to blitzkrieg-test Product B for one day and release it as Interim/production build

Critical note: developers cannot report bugs "internally" or fix them on-the-fly: all bugs must be reported to software testers who will then create bugs in the system and prioritize them accordingly

Also to avoid confusion - some online blogs say that Blitzkrieg testing = spot-testing; there is a major difference: spot-testing is quick testing by a lone scout, while Blitzkrieg is a short but major quality offensive

Friday, January 23, 2015

Scrum tales - part 15 - success FAQ

Why does Scrum matter?
- Scrum provides structure and forces prioritization in the company
- Scrum provides maximum work transparency at all times
- Scrum provides clear insight into problems within teams
- Scrum eliminates team leaders and administrative non-thinkers
- Scrum enables regular dialog between stakeholders and teams
- Scrum allows teams to make their own commitments and estimates within reason
- Scrum allows teams to work without constant supervision and guidance
- Scrum allows for predictable work increments
- Scrum is easily scalable

Why do successful sprints matter?
Each successful sprint carries a potential production-ready deliverable increment (a new product build, a spec or a prototype for a new tool, documentation, etc.), and as each sprint is time-boxed it is easy to make systematic progress and be competitive as a company

What happens when a sprint succeeds?
It shows that a Scrum team cares about the company's success and can be trusted more in the future, as the team has delivered what it agreed with the stakeholders

What happens when a sprint fails?
It shows that a Scrum team needs to improve in the future, and gives the team a chance to accept responsibility and address the organizational issues themselves

Is it ok to fail a sprint if it is done for a greater cause / higher goal?
No - "greater cause" in this context is an excuse for doing things your way, without alignment with previously agreed goals and priorities, and without transparency towards stakeholders

There are no valid reasons why a sprint should fail if:
  a) Communication with the Product Owner and all stakeholders was regular
  b) All issues with suggestions how to resolve them were raised as soon as they showed up
  c) Sufficient effort was expended

How to ensure a sprint succeeds when unexpected issues show up?
As soon as the issues show up:
  1) Consider all possible solutions to keep the sprint on track: additional research, a workaround, help from other teams, or just expending some extra effort
  2) Contact stakeholders / Product Owner to inform them of the issues and possible solutions
  3) Recommend sprint grooming as the last resort

Both the Scrum team and all stakeholders equally wish for each sprint to succeed, so consider all related communication a discussion between allies on a common quest for success