Tuesday, October 30, 2012

Scrum tales - Part 8 - Sprint retrospective

Although Scrum teams followed the guidelines and conducted all the necessary meetings at the end of a Sprint, it seems that no one fully grasped the idea behind the Sprint retrospective meeting and its core purpose

What is a Sprint retrospective meeting?
The Sprint retrospective is a Scrum team meeting initiated and held by the ScrumMaster at the end of each Sprint. In our strict Scrum implementation, the ScrumMaster asks three main questions, which are answered in cooperation with the entire team:
   1) What worked?
   2) What didn't work?
   3) What will we do differently?

These questions are similar to those asked during the Daily scrum meeting, so why do we repeat them at the end of the Sprint?
The questions may look similar, but they refer to a completely different aspect of the Sprint: its iterative process and internal team organization
During the Daily scrum meeting the team focuses on individual goals (PBIs) and tasks in the specific Sprint; during the Sprint retrospective meeting the team should focus on the process that led to PBIs succeeding or failing, and on how to make all future PBIs successful

If we cannot specify concrete PBIs when answering Sprint retrospective meeting questions, what should we talk about?
   1) What worked?
   Discuss and list all the changes in internal team organization, Sprint planning and ScrumMaster activities that were different compared to the team's previous Sprint; e.g.:
      a) We dedicated more time to make closer task estimates during the Sprint planning meeting
      b) This time we split PBIs for research and planning, and PBIs for development tasks
      c) We identified only the most critical ad-hoc tasks and prioritized them over the Sprint

   2) What didn't work?
   List all the obstacles identified from the moment you started planning the Sprint to the final Daily scrum meeting, especially issues that caused some Sprint deliverables (PBIs) not to be completed on time; e.g.:
      a) We had high risk PBIs that were underestimated at first but we didn't prioritize them in the Sprint
      b) ScrumMaster wasn't persistent enough to unblock externally blocked goals
      c) Our task estimates didn't cover for unexpected team member absences

   3) What will we do differently?
   This is the most important question; you must always discuss and answer it even if your previous Sprint was 100% successful, as there is always something you can improve in the process; e.g.:
      a) We will segment tasks better so that more team members can work on the same PBI in parallel
      b) ScrumMaster will review all the team's blocked tasks daily and act to resolve them, even if that means sending a single follow-up email every day
      c) We will work with the Product owner to define PBIs that are achievable within one Sprint rather than moving them across multiple Sprints

Is Sprint retrospective meeting the same as Sprint review meeting and can we merge them?
No, they are different meetings. The Sprint review is designed for the Scrum team to demo the Sprint deliverables to the Product owner, while the Sprint retrospective is primarily a team-only meeting aimed at adapting and improving the sprinting process within the team and making all future Sprints 100% successful

Thursday, October 11, 2012

Patching zen

We must find a perfect balance between administration and efficiency in order to achieve the goal: deliver the fixes customers need as soon as possible. Let's address specific use cases that were all encountered within 14 days' time

Support
   1) Each week Support checks the status of customer-requested bug fixes and gets patch ETAs from the developer teams - this is good and will produce hard data that will become part of the developer teams' SMART goals. However, not all patch and bug fix requests are backed by Product backlog items (PBIs) - this is not good, and it is something we must have in order to:
      a) Ensure all needed customer bug fixes are properly prioritized in developer team Product backlogs; remember that dev teams are mainly Scrum oriented
      b) Have hard evidence of when and what bug fixes were requested right there in TFS, easy for all teams to access and see, not scattered around in dozens of emails and forgotten

   2) If a specific PBI hasn't yet been committed to, add the new bug fix request there; this may be a change to how we have done it so far (add bug fix requests only if the goal is not yet approved, otherwise create a new PBI), but this change will increase efficiency and reduce your dependency on the development Product owner to approve new PBIs

   3) Don't exaggerate - ask for bug fixes only for the issues most troubling to our customers, or for issues that can directly help our revenue; other bugs will be fixed eventually. The more bug fixes you request in a patch, the longer it will take to get done

   4) Provide the development Product owner with weekly product patch priorities; NOT individual bugs but actual product names and their TFS PBIs; this will help the Product owner better understand customer priorities and correctly prioritize developer team goals to meet customer needs

   5) Reproduce a bug and put it into TFS before you ask for a patch; if you cannot reproduce it, QA are there to help

Developers
   1) You're a self-managing spec-ops unit now - don't rely on a single person to guide each and every step of the way to delivering the requested patch to customers. There is no central synchronous checklist to follow while you wait for someone else to complete their action item

   2) Some of the bugs requested in the patch cannot be reproduced, require too many changes and too much time, or look as though they work by design? Contact Support now and clarify them ASAP; discuss each and every special case, as these bugs are customer favorites and cannot be pushed aside

   3) Release notes not yet updated and reviewed? Don't wait for this step, as it is an asynchronous operation - instead, get the patch to the customer now

   4) Always cut a label before sending it to testing; this ensures the code base is pristine after the patch is quick-tested

   5) QA found a newly broken feature bug in the tested patch build? Fix it now and create a new label to send to testing, don't wait for explicit approval as the patch is not done until the customer starts smiling

   6) As soon as the label is quick-tested by QA, send the build link to Support; you don't need release notes, website content, or COO, CTO or CEO approval for this - just get the patch over to the customer

   7) Once the release notes are updated, update and rebuild the label; then it can also be promoted publicly on the website. We determined that there are dozens of "silent downloaders" who do pick up these patches manually after they are posted on the website, so we should help them as well
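
The developer steps above boil down to one rule: only labeling, quick-testing and sending the build block delivery to the customer; release notes and website promotion trail behind asynchronously. A minimal sketch of that split (step names are paraphrased from this post, not taken from any real tool):

```python
# Hedged sketch: which patch-flow steps block customer delivery and which
# can run asynchronously afterwards. Step names are paraphrased examples.

BLOCKING_STEPS = [
    "cut a label",
    "QA quick-tests the label",
    "send build link to Support",
]
ASYNC_STEPS = [
    "update and review release notes",
    "rebuild the label with release notes",
    "promote the patch publicly on the website",
]

def deliver_patch():
    # These must finish before the customer gets the patch
    for step in BLOCKING_STEPS:
        print(f"[blocking] {step}")
    print("customer has the patch")
    # These can trail behind delivery without holding it up
    for step in ASYNC_STEPS:
        print(f"[async]    {step}")

deliver_patch()
```

The point of the split is that nothing in the async list should ever delay the moment the customer starts smiling.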

Monday, October 8, 2012

Comprehensive test plan - PACT

A SMART goal provides much more internal team organization flexibility compared to Scrum, where the Product owner defines clear individual goals and expectations; however, increased flexibility without planning can also cause disorganization and make your life more complicated

There is no single Product owner for SMART, so why not consider a weekly team agreement / weekly plan as the "Product owner" that clearly guides the "Whats" (incremental goals, or "what needs to be done") in the team?

Define a weekly Priorities, Allocation, Continuity and Thresholds plan / a PACT to guide you as a team

Note that the points below are guidelines; you need to define and send your own weekly PACT plan and table

Priorities
Every bug you find takes you one step closer to accomplishing your SMART goal; however, some bugs need to be found before others:
   1) What are the main test priorities you need to work on? Check the Production schedule to see which products are expected to be delivered to testing; contact the developer teams directly to learn about unplanned changes to the schedule you don't see; also remember JIT

   2) How do you prioritize main product testing? Focus on product ROI: enterprise products first, then developer tools, and finally community (free) tools

   3) Any ad-hoc test priorities? Test patches and engines before regular product releases as this can usually be completed quickly; engines are usually needed for a specific main product release

   4) Quick-testing: always break off to check new installers and website content

Allocation
There is a finite number of test engineers and so many products and features to test. Parallelism is our enemy: we don't need test summaries for 4 products at once, but one test summary at a time, as soon as possible
   1) Unless you have a strong reason how you can improve efficiency by splitting the team to work in parallel on different test deliveries, focus on everyone testing one deliverable at a time

   2) Make testing of a single deliverable's feature set circular in order to reduce the number of false negatives (and increase the number of Zigs): tester A tests feature A while tester B tests feature B; then tester A tests feature B while tester B tests feature A
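
The circular rotation above generalizes to any team size: each round, every tester shifts to the next feature, so every feature eventually gets a fresh pair of eyes. A minimal sketch (tester and feature names are hypothetical):

```python
# Hedged sketch of circular test assignment: in round r, tester i covers
# feature (i + r) mod N, so coverage rotates through the whole team.

def rotate_assignments(testers, features, round_no):
    """Return a {tester: feature} mapping for the given round number."""
    return {tester: features[(i + round_no) % len(features)]
            for i, tester in enumerate(testers)}

testers = ["tester A", "tester B"]
features = ["feature A", "feature B"]

print(rotate_assignments(testers, features, 0))  # first pass
print(rotate_assignments(testers, features, 1))  # second pass, swapped
```

After len(features) rounds every tester has seen every feature once, which is exactly what dilutes individual blind spots.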

Continuity
We can have between 1 and 5 product test rounds. The first test round is usually the one with the most low-hanging Zigs to pick and put in your Zig basket. However, the fifth test round is as important as the first one, even though "What" to test is different
   1) Plan for the longest first test round, especially if you have a completely new product to test; always specify how long the testing will last, as there won't be a second chance to [regression] test all features from scratch

   2) Push the testing into the next week if necessary, but always specify why

   3) Subsequent test rounds should be short but long enough to cover all fixes and changes made by the developers since the last test summary

   4) The final round is always #3 (#5 for new products) - no matter what you find there, the product will be released, so think twice before deciding how much time to spend on this one, as you cannot extend the testing further

Thresholds
You have 3 new product builds to plan testing for, but how will you know when to stop testing one and move on to the next?

Actually, I'd like to hear some of your suggestions here, and then I'll update this post; there are many ways to define thresholds in testing, but also don't forget that you have a SMART goal that must be achieved each month, as it resets to 0

Testing the hell out of one product because it has easy-to-find Zigs while merely glancing through another is not an option, as the latter will be riddled with false negatives: the bugs are always there, since developers inadvertently ensure this is true; you just need to find them
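
Since suggestions were invited, here is one possible threshold, offered as a hedged sketch rather than the answer: stop testing a build once the rate of newly found Zigs per test hour drops below a cutoff for several hours in a row, i.e. once you hit diminishing returns. The cutoff, window and rates below are made-up numbers:

```python
# Hedged suggestion for a stop-testing threshold: move to the next build
# when the last few hourly find rates all fall below a cutoff.
# All numbers here are illustrative, not measured.

def should_stop(zigs_per_hour_history, cutoff=1.0, window=3):
    """Stop when the last `window` hourly Zig find rates are all below `cutoff`."""
    if len(zigs_per_hour_history) < window:
        return False  # not enough data yet to call it
    return all(rate < cutoff for rate in zigs_per_hour_history[-window:])

print(should_stop([5, 3, 2]))             # still finding Zigs fast: keep going
print(should_stop([5, 3, 0.5, 0.2, 0]))   # three slow hours in a row: move on
```

A rate-based threshold like this also plays well with a monthly SMART goal, since it steers effort toward the builds where Zigs are still cheap to find.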

Tuesday, October 2, 2012

Spin off blog - Developer central

Over the years we've been writing standards documents for developers covering coding, CLI and GUI, explaining what to do and what not to do during development. The documents cover best practices and contain some concrete suggestions; however, they are updated at most once per year, fully read once, and then forgotten

Technology keeps improving daily, and individual teams and developers keep discovering new tips and tricks to stay ahead, but in the end all this knowledge stays within a single team or, worse, with a single developer. We must change this and share our knowledge and experience

As new developers join the company, I keep seeing repeated questions and repeated issues with newly written code - mentors spend hours answering the same questions over and over again. Let's stop this trend now: http://apexdevcentral.blogspot.com/