Thursday, March 5, 2015

Static testing

Static testing is the main form of the systematic software test cycle at ApexSQL

Purpose of having a systematic software test cycle
  a) To minimize the probability of false negatives
  b) To efficiently leverage the limited time and resources we have for software testing
  c) To have predictable release dates which directly help our sales


Chronology of a systematic software test cycle
  1) (pre-testing) Internal developer cross-testing or Blitzkrieg testing of the new official product build being prepared for a release:
    a) Blitzkrieg testing must be covered within the planned development time / within the sprint
    b) Developers can only fix internally found issues based on available time planned for internal testing and fixing
    c) When the sprint time runs out, all remaining issues found must be sent to software testers to officially post as bugs

  2) (round 1) Developers freeze the code: further code changes in this branch are not allowed, so make sure to branch off further fixes and development.

Officially send the build to the first static testing round by providing software testers with as many details as possible about what needs to be tested, e.g.:
  Areas for testing:
    a) All resolved bugs, total of N
    b) GUI living standards affecting the following product UI: ...
    c) Regression test the following core features: ...
    d) New product functionalities / functionality changes: ...
    e) Planned or a possible change in performance of: ...
    f) Updated activation / product installer
    g) Support for new OS / SQL Server / integration with: ...
    h) Coexistence with the following products: ...
    i) ...

  3) (round 1) Software testers acknowledge the build sent by developers by providing a top-level test plan containing:
    a) Test areas covering the whole product including full regression testing and major use cases
    b) Priorities for test areas and tester assignments when multiple software testers will participate in testing
    c) ETA in test-hours for each of the test areas

 E.g.:
  Testing can start on <date> at the earliest and will be finished by <date>. The plan is:
    1) Test all resolved bugs, tester1 and tester2, 3h total
    2) GUI living standards, tester1, 2h total
    3) Regression test for ..., tester2, 3h total
    ...
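
As a quick illustration, such a top-level plan can also be kept as structured data so that the total ETA in test-hours and each tester's workload fall out automatically. Below is a minimal sketch in Python; the test areas, tester names, and hours are just the placeholders from the example above:

# A minimal sketch of a top-level test plan as structured data.
# The test areas, tester names, and hours are placeholders from the example above.
from dataclasses import dataclass

@dataclass
class TestArea:
    name: str
    testers: list[str]     # tester assignments for this area
    eta_hours: int         # ETA in test-hours
    priority: int          # lower number = higher priority

plan = [
    TestArea("Test all resolved bugs", ["tester1", "tester2"], eta_hours=3, priority=1),
    TestArea("GUI living standards", ["tester1"], eta_hours=2, priority=2),
    TestArea("Regression test core features", ["tester2"], eta_hours=3, priority=3),
]

# Total ETA, plus per-tester workload assuming hours split evenly between assigned testers
total_hours = sum(area.eta_hours for area in plan)
workload: dict[str, float] = {}
for area in plan:
    for tester in area.testers:
        workload[tester] = workload.get(tester, 0) + area.eta_hours / len(area.testers)

print(f"Total ETA: {total_hours} test-hours")
for tester, hours in sorted(workload.items()):
    print(f"  {tester}: {hours:g}h")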

During testing, when a new release-blocking bug is found, software testers will immediately report the bug number to developers so the bug can be fixed proactively while testing is still ongoing

When the testing is done, a standard test summary report is sent with information about remaining bugs, bug flow, and a release recommendation by software testers, including a clarification of why that recommendation was made

  4) (round 2) Developers fix all release-blocking bugs, including additional quick/simple bugs that aren't related to product functionality and have a minimal chance of causing new regression issues, then create a new build with internally verified fixes and repeat step #2

  5) (round 2) Software testers acknowledge the new build and again provide detailed test areas as specified in step #3

Note: this time around software testers can only plan and test changes that developers made, i.e.:
    1) All fixed bugs since the previous test round
    2) Regression test only specific functionality affected by bug fixes and mentioned by developers
    3) Spot-test the product for a very short time, several hours at most

  6) (round 3) Simply repeat steps #4 and #5 if there is a need to do so

  7) (release) If more release-blocking bugs are found, suggest which ones can be fixed immediately and release after that, with fixes verified intra-team by developers and by software testers as needed

  8) (post-release) All bugs that remain unfixed will need to be corrected in subsequent releases: a patch, a quick-maint release, or a regular release


Q&A
Q1: What happens if software testers can't start testing a build for more than a week?
A1: Developers are then required to prioritize additional cross-testing themselves by providing the test plan as mentioned under #3 above and committing to transparent testing:
  a) All issues found will be backlogged to software testers to retest when they can
  b) Only approved issues from the list can be fixed after testing (never fix in real-time)
  c) Once fixed, developers will again cross-test each other's fixes and release
  d) Software testers will go over all backlogged issues post-release and officially report bugs for still unfixed issues

Q2: Can we have more than 3 static test rounds?
A2: No. We need to quickly fix only the most critical issue(s) following test round #3 and release immediately. Any remaining unfixed issues will need to be fixed in the subsequent release

Q3: But ApexSQL <insert product name> is a special case and we can't test like that; we need more time, test rounds, testers, resources, etc. Can we change the rules for this one product?
A3: No. If a product really is a special case, then let's proactively work together on:
  a) Planning to do some internal testing during the development of the product, actively taking part in design decisions, checking all UI changes on-the-fly, etc.
  b) Creating a more thorough test plan including shared sandboxes, use cases, and even individual test cases as needed
  c) Automating some part of the testing with help from developers, or by outsourcing a specific test application (see the sketch below)
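
For point c), even a very small start counts. Below is a minimal sketch of an automated smoke test for a command-line build; the executable path, the /version switch, and the expected output are hypothetical placeholders, not an actual ApexSQL interface:

# A minimal smoke-test sketch for a command-line product build.
# The executable path, the /version switch, and the expected output
# are hypothetical placeholders, not an actual ApexSQL interface.
import subprocess

def smoke_test(executable: str) -> bool:
    """Return True if the build starts and reports its version without errors."""
    result = subprocess.run(
        [executable, "/version"],           # hypothetical version switch
        capture_output=True, text=True, timeout=30,
    )
    return result.returncode == 0 and "ApexSQL" in result.stdout

if __name__ == "__main__":
    ok = smoke_test(r"C:\Program Files\ApexSQL\SomeProduct\SomeProduct.com")
    print("Smoke test", "passed" if ok else "FAILED")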

Blitzkrieg testing

(Wikipedia) Blitzkrieg (German, "lightning war") is an anglicized term describing a method of warfare whereby an attacking force spearheaded by a dense concentration of armored and motorized or mechanized infantry formations, and heavily backed up by close air support, forces a breakthrough into the enemy's line of defense through a series of short, fast, powerful attacks

Although we're striving to automate as many development processes as we can, in some cases of complex product releases we still need to ensure products pass human / manual testing so there are no broken-functionality issues we missed

What happens when we have a limited number of testers covering multiple products from multiple development teams? As we never test in parallel, one or more products inevitably get delayed, we pile up technical waste, and we lose agility

Consider a typical development scenario:
  1) Starting on day 1, Dev team A works for 3 weeks and builds Product A for testing
  2) Starting on day 2, Dev team B works for 3 weeks and builds Product B for testing
  3) The same software testers cover both Dev team A and Dev team B
  4) Software testers receive Product A and Product B for testing on week 4
  5) We prioritize products based on their ROI, date of last release, and customer requests - one product still gets delayed and there is always at least one customer that we've let down

Instead of losing momentum, we can now add air support (developers) to help mechanized infantry (software testers) and blitzkrieg the product for one day

Years ago, in the early days of ApexSQL, each developer owned exactly one product and was also responsible for self-testing the product before sending it to testers. Once this self-testing was no longer mandatory, we regressed to not doing any self-testing prior to sending products to testing

Now consider replacing step #5 above with:
  5) Dev team A works together with software testers to blitzkrieg-test Product A for one day and release it as an interim build, or even a production build if software testers agree
  6) Dev team B works together with software testers the next day to blitzkrieg-test Product B for one day and release it as an interim/production build

Critical note: developers cannot report bugs "internally" or fix them on-the-fly: all bugs must be reported to software testers, who will then create bugs in the system and prioritize them accordingly

Also, to avoid confusion - some online blogs say that Blitzkrieg testing = spot-testing, but there is a major difference: spot-testing is quick testing by a lone scout, while Blitzkrieg is a short but major quality offensive

Friday, January 23, 2015

Scrum tales - part 15 - success FAQ

Why does Scrum matter?
- Scrum provides structure and forces prioritization in the company
- Scrum provides maximum work transparency at all times
- Scrum provides clear insight into problems within teams
- Scrum eliminates team leaders and administrative non-thinkers
- Scrum enables regular dialog between stakeholders and teams
- Scrum allows teams to make their own commitments and estimates within reason
- Scrum allows teams to work without constant supervision and guidance
- Scrum allows for predictable work increments
- Scrum is easily scalable

Why do successful sprints matter?
Each successful sprint carries a potential production-ready deliverable increment (a new product build, a spec or a prototype for a new tool, documentation, etc.), and as each sprint is time-boxed it is easy to make systematic progress and be competitive as a company

What happens when a sprint succeeds?
It shows that a Scrum team cares about the company's success and can be trusted more in the future, as the team has delivered what it agreed with the stakeholders

What happens when a sprint fails?
It shows that a Scrum team needs to improve in the future, and gives the team a chance to accept responsibility and address the organizational issues themselves

Is it ok to fail a sprint if it is done for a greater cause / higher goal?
No - saying "greater cause" in this context is an excuse for doing things your way without alignment with previously agreed goals and priorities, and with lack of transparency towards stakeholders

There are no valid reasons why a sprint should fail if:
  a) Communication with the Product Owner and all stakeholders was regular
  b) All issues, with suggestions on how to resolve them, were raised as soon as they showed up
  c) Sufficient effort was expended

How to ensure a sprint succeeds when unexpected issues show up?
As soon as the issues show up:
  1) Consider all possible solutions to keep the sprint on track: additional research, a workaround, help from other teams, or just expending some extra effort
  2) Contact stakeholders / Product Owner to inform them of the issues and possible solutions
  3) Recommend sprint grooming as the last resort

Both the Scrum team and all stakeholders equally wish for each sprint to succeed, so consider all related communication as a discussion between allies on a common quest for success

Thursday, December 11, 2014

Developer Vs. Programmer

At ApexSQL we're making killer tools for SQL Server, and to do so we're proud to have strong product and website teams currently consisting of developers and SQAs. We have no programmers in our teams and we don't need any.

What is the difference between a programmer and a developer?


Definition

Programmer is a grunt receiving orders from a team leader and/or project manager, working on a piece of code outsourced for someone else's solution and based on someone else's plan.

Developer is a spec-ops, lean-mean-development-machine part of a small band of heroes owning and building top-notch solutions, come hell or high water.


Examples

Programmer has many bosses: a senior, a team leader, a project manager, etc.
Developer has only one boss: the customer.

Programmer wants to write a program or some code.
Developer wants to create the best solution for the customer.

Programmer comes to work and works for 8h.
Developer has incremental results each day regardless of hours worked.

Programmer asks questions.
Developer makes suggestions.

Programmer speaks for himself as one from the team.
Developer speaks to the team and for the team.

Programmer reads roadmaps and writes code to accommodate roadmaps.
Developer makes roadmaps and works to accommodate customers.

Programmer doesn't know the competitors and tends to be surprised.
Developer knows what competitors had for breakfast and what they plan for dinner.

Programmer schedules meetings and gives updates to stakeholders when asked to do so.
Developer has an active com-link with battle buddies and keeps stakeholders constantly informed of mission status.

Programmer learns when new skills are needed for work and/or is trained by others.
Developer learns organically and shares the knowledge by training others.

Programmer works with other teams to share personal workload.
Developer works with other teams to lessen everyone's workload.

Programmer strives not to fail the sprint.
Developer strives to add more work on top of a successful sprint.

Programmer waits for SQAs to test code changes and report bugs.
Developer self-tests new code and makes it tough for SQAs to find bugs.

Programmer is unsure about performance of created tools.
Developer perf-tests regularly and knows that owned tools are top-performers in the market.

Programmer expects SQAs to write test cases and to regression-test everything.
Developer cooperates with SQAs to write unit tests, to implement detailed logging and to automate regression testing.

Programmer uses 3rd party tools to speed up work.
Developer uses our company tools to speed up work and to help improve the tools in the process.

Programmer "overcommits", "cannot reproduce", "cannot fix", "must ask someone else".
Developer gets the job done. Period.

Tuesday, September 2, 2014

Hard sh** and smart sh**

Great GSD ("Get Sh.. Done", as the ApexSQL Nis office's central poster says) = consistently having both short-term and long-term transparent results

Some teams work using Scrum, organizing work through goals (the answer to "what?") and tasks (the answer to "how?"), while others have monthly goals / quotas to achieve by making daily and weekly progress in smaller increments

We often forget about the goals while focusing exclusively on the tasks at hand to get them finished no matter what, forgetting what is actually needed in the end and by when

Now I'm not saying that focusing on daily tasks is wrong - one of my own email quotes from the early days said:
"Focused, hard work is the real key to success. Keep your eyes on the goal, and just keep taking the next step towards completing it." - John Carmack
 
We must always keep the goal in mind and have Daily deliverables at all times

But in order to grow, besides working hard we must also be cognizant of the end goal at all times and be able to work smart to achieve the goal

GSD has two levels:
   1) Hard Sh**: daily deliverables - things we must each do our part on and complete in order to make daily progress, e.g. fix stubborn bugs, test bug fixes and report new bugs, write TS articles for unfixed bugs, publish articles, communicate with stakeholders, share your progress daily, etc.
   2) Smart Sh**: make a dent in the universe by aligning all action with end goals and complete the goals with a reasonable amount of effort in a reasonable amount of time

How does Smart Sh** translate to your everyday work?
   a) Dev teams should realize that customers have no use for 50 bug fixes in code and would much prefer 20 bug fixes in a product build they can actually download - know when to cut and deliver.
   b) SQAs should invest time to understand a product's usefulness, to learn about it and to use it as customers do in order to test the product well, to never assume they know everything, to write about it in a non-hamburger-helper way, and to excel in customer support.
   c) Everyone should be investing time into ERF mentorship with new colleagues so that they can start contributing back and in turn save time for you.
   d) Everyone should automate repetitive work - currently a huge soft spot in multiple teams.

What does automation have to do with working smart?
A lot: one of the funnier historical examples is several devs going rogue and writing software that fully automates sending Daily Scrum Summaries - while other teams were taking hours of time weekly to manually compile good Scrum Summary emails, these devs took one day and created a two-click solution that in turn saved them days of time; I always received spotless Scrum Summaries from their teams, which was a mystery to me until I found out about the "plot" ;) (devs are good guys - they were planning to distribute the software to everyone when we stopped sending these emails daily)
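
As a rough illustration only (this is not the devs' actual tool - the addresses, SMTP host, and task data below are made-up placeholders), automating such a summary email can be as small as:

# A rough illustration of automating a Daily Scrum Summary email.
# Not the devs' actual tool; addresses, host, and task data are made-up placeholders.
import smtplib
from datetime import date
from email.message import EmailMessage

team_updates = {
    "dev1": ["Fixed bug #1234", "Started work on the export feature"],
    "dev2": ["Reviewed a pull request", "Regression-tested the installer"],
}

# Compile the summary body from each team member's updates
body_lines = [f"Daily Scrum Summary - {date.today():%Y-%m-%d}", ""]
for member, items in team_updates.items():
    body_lines.append(f"{member}:")
    body_lines.extend(f"  - {item}" for item in items)
    body_lines.append("")

msg = EmailMessage()
msg["Subject"] = f"Daily Scrum Summary - {date.today():%Y-%m-%d}"
msg["From"] = "team@example.com"
msg["To"] = "stakeholders@example.com"
msg.set_content("\n".join(body_lines))

# Send via the team's SMTP server (placeholder host)
with smtplib.SMTP("smtp.example.com") as server:
    server.send_message(msg)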
 
Time is a limited commodity, so by investing some of it to automate repetitive work (smart vs. hard), devs gained more time to focus on what really matters: writing better code and delivering great products to the customers in order to make a dent in the universe. I'm also sure devs had more fun in the process ;)

Ultimately whatever we do we must ask ourselves:
   A) Can I complete the task at hand more efficiently with less effort and time?
   B) What goal will be closer to completion when I finish this task today?
   C) What will I deliver to customers when the goal is completed?
   D) Will the customers pay me for the deliverable?
 
If no one will pay you in the end, why would you do it in the first place? ;)

Tuesday, May 27, 2014

ERF - Expectations, Results, Feedback

The first badge for leading and sharing is awarded for successfully leading your team to achieve defined goals, and for maintaining transparency of team progress while sharing new experience and knowledge with peers and seniors

However, the second badge for leading and sharing is more challenging and requires showing new colleagues how they can share-and-lead

Here is a proven good way to earn the second badge through ERF (set Expectations, review Results, provide Feedback):



When in doubt, just follow the three ERF steps:
  1) Set Expectations - clearly explain what the final goal of the deliverable is, why we need it, and what the epic story behind it is; provide a system description or an example of similar work from before; then go back to your own goals and don't interfere
  2) Review Results - once the results are in, dedicate time to review them in detail, noting what is good and what is not so good; don't fix the results yourself
  3) Provide Feedback - take all your notes from the previous step and dedicate more time to make them as clear as possible before sending; finally, send all the feedback and repeat the expectations - go back to step #1 and repeat the full cycle until the results match the expectations


Also, there are multiple ways through which you will never achieve the second badge, e.g.:



Don'ts to watch out for:
  1) Don't expect results without first setting expectations and clarifying them
  2) Don't yell when results are not matching expectations - instead just provide actionable feedback and repeat expectations
  3) Don't set deadlines based on how much time you would need to complete the same goal; however make sure to always have clear ETAs and to follow up on time as needed
  4) Don't fix low quality results yourself - instead ask for resubmission after providing feedback
  5) Don't confront when you identify a problem - instead ask for clarifications and observe
  6) Don't IM or voice-only your feedback - instead share it on G+ or in emails using a #hash[tag] so that it can be traced back easily


How will you know if you have succeeded in mentoring new colleagues? They'll be the ones to achieve defined team goals and to share new experience and knowledge with you

Wednesday, December 11, 2013

Actionable inter-team deliverable approvals

Every Scrum team has either asked other teams for help with deliverable reviews or has provided assistance by reviewing the deliverables of other teams.
This inter-team collaboration is encouraged as it will in turn improve inter-team communication, the speed of deliverable reviews and, most importantly, the quality of deliverables

However, before Product Owners can accept a deliverable as approved, we'll need to see that the team performing the review has actually invested time to provide constructive and actionable feedback, or we'll automatically assume that the deliverable doesn't meet the expected acceptance criteria

What does this mean in practice?

Scrum team being reviewed - help reviewers to help you:
   a) Explain to the review team the intent behind the deliverable
   b) Specify areas that must be reviewed in a structured way
   c) Note what kind of feedback you need in order to have the deliverable approved

Team performing the review - provide full information in your reply:
   a) What has been reviewed
   b) What should be modified/updated/clarified, or
   c) Why nothing should be modified/updated/clarified (less likely)
   d) Recommend whether the reviewed deliverable should be approved or not by the PO

Why should a team spend time reviewing other teams' deliverables if they are not direct stakeholders? Our Company values statement says that Everyone serves: "Treat everyone as a customer, internal or external, and make responsiveness part of your personal brand. Build effective working relationships with a focus on being positive, informative, actionable, and helpful."
 
Leave-me-alone replies such as "Approved", "We have nothing to add", "This is good", and similar won't count as reviews at all during the Sprint review