Wednesday, December 26, 2012

Scrum tales - Part 10 - Epics

Our strict Scrum implementation acknowledges two work units:
   1) Product backlog items (PBIs): concise, finite, measurable goals with deliverables that are either Done or Not done based on a simple question "Did you <PBI title>?", defined and owned by the Product owner
   2) Tasks: atomic, weighted (estimated) work units assignable to a single Scrum team member, defined and owned by the Scrum team

How long can these work units be?
   1) PBIs must always be achievable within a Sprint's time (under one month), and the Scrum team must never commit to a PBI without being sure they can complete it within the Sprint
   2) Tasks should be as short as possible and a rule of thumb is for each Scrum team member to have at least one task completed each Sprint day (daily deliverables)

What happens when we have goals that cannot be achieved within one Sprint's time, or goals abstract/broad enough that teams must first figure out how to achieve them? Enter the Epic

An Epic is a Product backlog item "on steroids" with the following differences:
   a) An Epic is broad and abstract and cannot always be confirmed as Done or Not done based on a Did-you question
   b) An Epic is done when the client (Enterprise Product owner in our case) says it is done
   c) An Epic has no time limitation - it can be worked on for one week or for several months/sprints until the client need is fulfilled
   d) An Epic is owned and defined by the Enterprise Product owner
   e) An Epic cannot have any Tasks directly defined under it; instead it has individual measurable Product backlog items as the means to accomplish it through one or more Sprints
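The relationship between the three work units can be sketched as a small containment model - a hypothetical illustration only, with class and field names invented for this example, not taken from any real tool:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    estimate_hours: float          # tasks are weighted (estimated)
    assignee: str                  # assignable to a single team member

@dataclass
class PBI:
    title: str                     # "Did you <PBI title>?" must be answerable
    tasks: list = field(default_factory=list)
    done: bool = False

    def fits_in_sprint(self, sprint_hours: float) -> bool:
        # a PBI must always be achievable within one Sprint
        return sum(t.estimate_hours for t in self.tasks) <= sprint_hours

@dataclass
class Epic:
    title: str                     # owned by the Enterprise Product owner
    pbis: list = field(default_factory=list)
    client_confirmed: bool = False # no time limit: done when the client says so

    def add_pbi(self, pbi: PBI) -> None:
        # Epics contain PBIs only - never Tasks directly
        self.pbis.append(pbi)

    def is_done(self) -> bool:
        return self.client_confirmed
```

Note how the sketch encodes the rules above structurally: Tasks hang off PBIs, PBIs hang off the Epic, and the Epic's Done state is nothing but the client's confirmation.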

How will we know that an Epic is really Done? When your client confirms that the needs have been fulfilled. The needs are usually to achieve a specific business objective or a specific change in actionable metrics. See some examples next

Developers (Web developer team included)
Epics provide a good way for Enterprise Product owner to describe Production roadmap:
   A) "Provide a new product to competitively enter the Azure data compare market" - concise enough to explain the need but broad enough to allow Product owner to figure out technical details and step-by-step PBI deliverables with the help of the Scrum team(s)
   B) "Provide a new data restore product version to exceed comp performance" or "to achieve fully competitive state"

Marketing, SEO
   A) "Increase leads by 50%"
   B) "Establish company presence on a new major community network"

Operations
   A) "Increase lead conversion by at least 10%"
   B) "Make Scrum success rate efficiency 95% or higher"

SysAdmin
   A) "Establish 3 levels of company data backup"
   B) "Improve uptime of all TFS services to 99%"

Support
   A) "Boost RECOs by 10%"
   B) "Establish a new defensive support system to halve repetitive cases"

QA
   A) "Implement new test cases management system"
   B) "Fully automate testing of all products' command line functionality"

Thursday, November 29, 2012

Scrum tales - Part 9 - The Five Whys

When the sprint ends we expect all committed goals to succeed, which unfortunately is not always the case. Scrum teams must meet to discuss what went wrong and to discover the flaws in the process vs. just tossing the blame around

Scrum teams meet to conduct a Sprint retrospective meeting whose main purpose is to identify and analyze the issues in the Sprint process and to figure out how to fix them in future sprints. Sprint is an iterative process: no matter which goals fail, there is always a reason why they failed, and it is not one team's or one person's fault but a flaw in the process. This underlying flaw is what teams almost always fail to identify, skipping the chance to resolve it in future sprints

Let's reflect on what great companies have already done before us and use it to our advantage - specifically, the Five Whys lean manufacturing technique will help us explore the cause and effect relationships underlying specific issues we encounter and finally get to the root cause of a problem, or in our case a failed sprint goal

How do the Five Whys work? The team meets to discuss an issue and iteratively asks "Why?" five times - or more specifically "Why did the process fail?" - then determines a Proportional investment for solving the issue going forward. Best to illustrate this with an example applicable to Scrum and the everyday goals we have:

A developer team sprinted and the Product backlog item "Get new product maintenance version approved for production" failed. During the Scrum team retrospective meeting, the following questions should be asked and answered:
   1) First Why
Q: Why wasn't the product approved for production?
A: Not all High priority bugs were fixed in time to get the approval
Proportional investment: This is not a preferred solution, but if it comes down to a time crunch, you will roll up your sleeves for a weekend and catch up

   2) Second Why
Q: Why weren't all High priority bugs fixed?
A: New bugs were discovered during the unplanned third static testing round
Proportional investment: You will always assume there will be the maximum number of static testing rounds when planning a new Sprint and never raise everyone's expectations by committing to releasing the product if the goal is too risky

   3) Third Why
Q: Why did the product go into three static testing rounds?
A: The product was of lower quality than expected even though it is a maintenance release
Proportional investment: You will always plan ahead for better internal self-testing before sending a new build to a static test round to catch and fix the low hanging fruit before it goes over to QA and results in unexpected new bugs

   4) Fourth Why
Q: Why was the product of lower quality than expected for a maintenance release?
A: Some of the engine code was rewritten in between test rounds in order to fix a specific bug that a customer requested to be patched
Proportional investment: Always raise a hand first when you see large code changes are needed and review them with the Product owner vs. hacking away at previously tested code in the middle of product testing, no matter if the Almighty asked for the patch in person

   5) Fifth Why
Q: Why was the engine code rewritten during the final product testing phase?
A: We have to fix ASAP all bugs forwarded over by the Support team that customers need fixed, no matter if we're in the final testing phase or not - we have a team performance goal to achieve
Proportional investment: No, you don't; this means our system is flawed and the product will be delayed, meaning all customers will have to wait or risk getting a lower quality build for one specific bug fix. Let's make a rule not to patch anything during the product testing phase and instead do it immediately after the release to production if deemed necessary
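The walk above can be sketched as a simple loop: each answer becomes the subject of the next "Why?", and the deepest answer is the candidate root cause. A minimal, purely illustrative sketch (function names are invented for the example):

```python
def five_whys(problem, answer_for):
    """problem: the failed goal; answer_for: a callable mapping a question
    to the team's answer. Returns the chain of (question, answer) pairs."""
    chain = []
    subject = problem
    for _ in range(5):
        question = f"Why did this happen: {subject}?"
        answer = answer_for(question)
        chain.append((question, answer))
        if answer is None:           # team ran out of answers early
            break
        subject = answer             # the answer becomes the next "why"
    return chain

def root_cause(chain):
    # the deepest answer in the chain is the candidate root cause
    answers = [a for _, a in chain if a is not None]
    return answers[-1] if answers else None
```

The Proportional investments are attached per level in the real meeting; the sketch only captures the questioning skeleton.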

To help out with asking and answering the Whys, I will volunteer to be the Why Master for all Scrum teams - meaning you need to invite me to all Scrum retrospective meetings if your sprint goals failed, so we can quickly discuss how to improve our processes and prevent future sprint goal failures

Monday, November 19, 2012

Responsible diversification

...or simply put: how to have a cross-functional team without risking Sprint goal failure

The issue

On one hand, Scrum is all for having cross-functional and self-managing teams vs. narrow specialists, while on the other hand each team member should be able to finish every goal and task within a Sprint no matter how complex it is. This appears to be the most common Scrum paradox, and I'm seeing numerous articles and even entire books on how to solve it

We want to:
   a) Make Scrum teams fully self-managing
   b) Have employees of all qualifications, both Jr. and Sr. to focus on the same critical company goals

We don't want to:
   a) Isolate employees by making them run sprints on their own just because they cannot yet work on all tasks in the team
   b) Introduce valid excuses as to why Sprint goals weren't met
   c) Loosen up the Definition of Done and affect team deliverable quantity and/or quality

Solution

Our goal is to always KISS, so to fix the mentioned paradox we'll introduce some flexibility by allowing Scrum teams to be collectively qualified to achieve all assigned goals. What does this mean for Scrum teams compared to the practice so far:
   1) Each team member no longer needs to be able to perform all tasks in the sprint
   2) Each team must still be able to complete all team tasks with sufficient quality
   3) Tasks achievable by select team members only are to be treated as high risk tasks, meaning they must be marked as such and prioritized within the sprint
   4) Each team must make an Internal redundancy plan for tasks achievable by select team members only and lay out the redundancy pairs in the Sprint planning meeting
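Rules 3 and 4 can be sketched as a small check over the team's skill matrix - a hypothetical illustration, with names and the backup-selection rule invented for this example:

```python
def plan_redundancy(tasks, skills):
    """tasks: {task: required_skill}; skills: {member: set of skills}.
    Returns (high_risk, pairs) where pairs maps each high risk task to
    (owner, backup-to-train) drawn from the rest of the team."""
    high_risk, pairs = [], {}
    for task, skill in tasks.items():
        qualified = [m for m, s in skills.items() if skill in s]
        if len(qualified) == 1:      # achievable by one member only: high risk
            owner = qualified[0]
            high_risk.append(task)
            # sketch only: pick any other member as the redundancy pair;
            # a real plan would pick by skill affinity
            backup = next(m for m in skills if m != owner)
            pairs[task] = (owner, backup)
    return high_risk, pairs
```

The output of the sketch is exactly what the Sprint planning meeting should produce on paper: the flagged high risk tasks and their redundancy pairs.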

In practice - teams

   A) Support team can have one part of the team (Writers/Analysts) focus on writing, analysis and other high level tasks first while Product support guys focus on handling the actual customer support and lower level writing, posting tasks

   B) Core developer teams can directly integrate Jr. developers, have Sr. devs focus on new development design and complex bug fixes while Jr. devs work on low/medium complexity bugs and following specs

In practice - individuals

   A) Working exclusively on simpler tasks, or taking ad-hoc tasks and insisting there is no more work left for you in the sprint when all low level tasks are done, is an excellent way to not advance your career and eventually have your regular review result in DNE

   B) If you can work on all team tasks, you must still roll up your sleeves when there is critical low level / transactional work left; taking lower priority but more complex work in such cases is even worse than not working at all

Friday, November 2, 2012

Tune in to correct bug frequency

Not all bugs are created equal, which means that some are "more equal" than others, especially those with a higher Probability / Frequency rating; but how do we determine the right frequency?

Here's what we found recently:

1) One of the main product features doesn't work at all when used on a specific database / a specific SQL script; this is High severity as the core feature is broken, and since it is always reproducible on this specific database / script it will remain a High bug?
No, the frequency here is Sometimes or even Rarely depending on the probability that customers will use that specific database or script

2) Product functionality worked perfectly during the first few repeated tests, but then it just stopped working, and after an OS reboot the same scenario repeated; this is a High bug as the feature is not working, but as it is not always reproducible it will be Medium?
In such cases you need to stick with the bug a bit longer and focus on isolating the exact individual steps that led to the feature no longer functioning correctly. If you isolate the bug cause well enough to reproduce it more Often / Always, it may even be a High bug

3) After changing some default system settings / stopping a product background service / installing a 3rd party kernel mode driver, our product stops working correctly and throws errors = High severity; this is easily reproducible in 100% of cases when following the exact steps, which means frequency is Always?
Ask yourself how many customers will do what you just did - hack or tweak a default system or software installation, or install that specific 3rd party software; the answer is very few, including those power users who like to play with everything. This is the actual probability of someone repeating those steps, and it matches a Rarely / Sometimes frequency corresponding to a Low/Medium bug

4) There are several UI standard violations on the main Options dialog as standard buttons are missing - this is Low severity, and it is obviously always reproducible, so this is a Medium bug?
Yes, but your reasoning is not right - although a Low severity issue, it will be encountered by the majority of customers as it is present on a high profile product dialog, which makes the frequency Often / Always

5) Several icon inconsistencies on the main product window when compared to other products - Low severity, but this will be seen by all customers as it is on the main product window, so a Medium bug?
Recheck this - the icons are on the main window, but are you that certain that the majority of customers will actually notice slightly different icons and text in the main menu / ribbon bar compared to other products? I say this will be noticed Rarely if at all

To summarize: frequency/probability should be interpreted with some thought vs. the literal assumption that if an icon is "always different" or a button is "always missing" it corresponds to 100% or Always frequency
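As a rough illustration, the reasoning above can be approximated by combining the two scales numerically; the exact mapping below is an assumption made for the sketch, not our official guideline:

```python
# Illustrative severity x frequency lookup roughly consistent with the
# examples above; the exact mapping is an assumption, not official policy.

SEVERITIES = ["Low", "Medium", "High"]
FREQUENCIES = ["Rarely", "Sometimes", "Often", "Always"]

def priority(severity, frequency):
    """Average the two scales and map back to Low/Medium/High."""
    s = SEVERITIES.index(severity)                 # 0..2
    f = FREQUENCIES.index(frequency) * 2 / 3       # rescale 0..3 -> 0..2
    return SEVERITIES[round((s + f) / 2)]
```

Under this sketch a High severity bug seen Rarely lands on Medium, and a Low severity issue seen Always also lands on Medium - matching cases 1, 3 and 4 above.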

Thursday, November 1, 2012

Optimize for success

"With great power comes great responsibility" - Stan Lee

Self-management can be a double-edged sword without discipline and a plan. Many teams including QA now have SMART goals to guide you; however, you must guide yourself on a daily basis in order to meet the monthly SMART goal expectations

Let's focus on the latest real-life use case:
   1) Testing weekly plan was defined: use up to 2 hours per day to test patches and focus on testing our new enterprise product for the remainder of the day
   2) Testing weekly plan was refined and approved - by the end of this week we'll have 2nd testing round of enterprise product wrapped up
   3) Week is almost over, however enterprise product 2nd testing round has just started and will be postponed for 3-4 days

What happened? Here's what I heard:

   A) "We had too many patches to test and this took a lot of time"
These patches were preplanned in the weekly test plan - this is not an excuse to violate the weekly test plan. Our goal isn't to fully regression test each patch and verify its readiness for production, but to focus on verifying those few (usually 1-2) bug fixes, spot-test core functionality and get it out to the customers who requested it

   Solution: dedicate a fixed chunk of time for testing each patch build, up to 1 hour; first verify fixed bugs then spot-test core functionality and finally send the patch testing summary. If the whole QA team found no core functionality issues during this time, it is highly unlikely that the single customer who requested the patch will find any; if there are new issues, we'll quickly re-patch without wasting much time

   B) "We had too many Support team forwarded cases that we stuck with for a long time in order to not forward them to developers"
What I'm seeing is ~82% of Support forwarded cases handled within the QA team, which is way above the 50% SMART goal; although this will be measured more precisely soon and is also an important goal, you must optimize your time better - taking 4-5 hrs to stick with a single support issue is overkill and will inevitably cause your other SMART goal (Zig score) to suffer and our products to be late to production due to testing delays

   Solution: dedicate a fixed chunk of time for sticking with a support issue, especially if you are over the 50% SMART goal expectation for the month. Discover your own diminishing return point and use it to balance out the SMART goals when you make no progress with the support case at hand

   C) "We had many ad-hoc issues that piled up and took more time than actual testing, developers needed help with specific bugs, new team member needed guidance, there were team planning meetings, bugs needed priority corrected as we updated severity guidelines, Skyfall is premiering in the movies this week"

   Solutions:
   a) Dedicate a fixed chunk of time for team planning meeting (learn from Daily scrum) - 30min max
   b) When creating bugs, explain them in more detail so devs don't ask you to clarify them - this will save time for both you and the devs in the long run
   c) Don't go to see the new James Bond movie until you have achieved daily SMART goal of at least 50 Zigs
   d) Before finishing a workday stop for a minute and ask yourself: "Have I achieved all my daily SMART goal expectations and are we on track with the weekly plan?" If the answer is yes, go ahead and have a scary night watching Paranormal Activity 4

Tuesday, October 30, 2012

Scrum tales - Part 8 - Sprint retrospective

Although Scrum teams followed guidelines and conducted all necessary meetings at the end of a Sprint, it seems that no one fully grasped the idea behind the Sprint retrospective meeting and its core purpose

What is a Sprint retrospective meeting?
Sprint retrospective meeting is a Scrum team meeting initiated and held by the ScrumMaster at the end of each Sprint. In our strict Scrum implementation, three main questions are asked by the ScrumMaster and answered in cooperation with the entire team:
   1) What worked?
   2) What didn't work?
   3) What will we do differently?

These questions are similar to those asked during the Daily scrum meeting - why do we repeat them at the end of the Sprint?
The questions may look similar, but they refer to a completely different aspect of the Sprint - its iterative process and internal team organization
During the Daily scrum meeting the team focuses on individual goals (PBIs) and tasks in the specific Sprint; during the Sprint retrospective meeting the team should focus on the process that led to PBIs being successful or not, and on how to make all future PBIs successful going forward

If we cannot specify concrete PBIs when answering Sprint retrospective meeting questions, what should we talk about?
   1) What worked?
   Discuss and list all the changes in the internal team organization, Sprint planning and ScrumMaster activities that were different compared to the team's previous Sprint; i.e.:
      a) We dedicated more time to make closer task estimates during the Sprint planning meeting
      b) This time we split PBIs for research and planning, and PBIs for development tasks
      c) We identified only the most critical ad-hoc tasks and prioritized them over the Sprint

   2) What didn't work?
   List all the obstacles identified from the moment you started planning the Sprint to the final Daily scrum meeting, especially issues that caused some Sprint deliverables (PBIs) to not be completed on time; i.e.:
      a) We had high risk PBIs that were underestimated at first but we didn't prioritize them in the Sprint
      b) ScrumMaster wasn't persistent enough to unblock externally blocked goals
      c) Our task estimates didn't cover for unexpected team member absences

   3) What will we do differently?
   This is the most important question that you must always discuss and answer even if your previous Sprint has been 100% successful as there is always something you can improve in the process; i.e.:
      a) We will segment tasks better so that more team members can work on the same PBI in parallel
      b) ScrumMaster will review all team blocked tasks daily and act to resolve them even if it means to send a single email follow up every day
      c) We will work with Product owner to define PBIs that are achievable during one Sprint duration vs. moving them across multiple sprints

Is Sprint retrospective meeting the same as Sprint review meeting and can we merge them?
No, they are different meetings. Sprint review is designed for the Scrum team to demo the Sprint deliverables to the Product owner, while Sprint retrospective is primarily a team-only meeting aimed at adapting and improving the sprinting process within the team and making all future Sprints 100% successful

Thursday, October 11, 2012

Patching zen

We must be able to find a perfect balance between administration and efficiency in order to achieve the goal - deliver customer needed fixes as soon as possible. Let's address specific use cases that were all encountered in under 14 days' time

Support
   1) Each week Support checks the status of customer requested bug fixes and gets patch ETAs from developer teams - this is good and will result in good hard info that will be part of developer team SMART goals. However, not all patch and bug fix requests are backed up by Product backlog items (PBIs) - this is not good and is something we must have in order to:
      a) Ensure all needed customer bug fixes are properly prioritized in developer team Product backlogs; remember that dev teams are mainly Scrum oriented
      b) Have hard evidence when and what bug fixes were requested right there in the TFS, easy for all teams to access and see, not scattered around in dozens of emails and forgotten about

   2) If a specific PBI hasn't yet been committed to, add new bug fix request there; this may be a change to how we did it so far (add bug fix requests only if the goal is not yet approved, otherwise create new PBI), but this change will increase efficiency and reduce your dependencies on development Product owner to approve new PBIs

   3) Don't exaggerate - ask for bug fixes only for the most troubling issues for our customers, or when it can directly help our revenue; other bugs will be fixed eventually. The more bug fixes you request in a patch, the longer it will take to be done

   4) Provide development Product owner with weekly product patch priorities; NOT individual bugs but actual product names and their TFS PBIs; this will help Product owner understand customer priorities better and correctly prioritize developer team goals to meet customer needs

   5) Reproduce a bug and put it into TFS before you ask for a patch; if you cannot reproduce it, QA are there to help

Developers
   1) You're self-managing spec-ops now - don't rely on a single person to guide each and every step of the way to deliver the requested patch to customers. There is no central synchronous checklist to follow and wait until someone else completes their action item

   2) Some of the bugs requested in the patch cannot be reproduced, require too many changes and too much time, or look like by-design behavior? Contact Support now and clarify them ASAP; discuss each and every special case, as these bugs are customer favorites and cannot be pushed aside

   3) Release notes not yet updated and reviewed? Don't wait for this step as it is an asynchronous operation - instead get the patch to the customer now

   4) Always cut a label before sending it to testing; this ensures the code base is pristine after the patch is quick-tested

   5) QA found a newly broken feature bug in the tested patch build? Fix it now and create a new label to send to testing, don't wait for explicit approval as the patch is not done until the customer starts smiling

   6) As soon as the label is quick-tested by QA, send the build link to Support; you don't need release notes, website content or COO, CTO, CEO to approve this - just get the patch over to the customer

   7) Once release notes are updated, update and rebuild the label, and then it can also be promoted publicly on the website. We determined that there are dozens of "silent downloaders" who do get these patches manually after they are posted on the website, so we should help them as well

Monday, October 8, 2012

Comprehensive test plan - PACT

A SMART goal provides much more internal team organization flexibility compared to Scrum, where the Product owner defines clear individual goals and expectations; however, increased flexibility without planning can also cause disorganization and make your life more complicated

There is no singular Product owner for SMART, but why not consider a weekly team agreement / weekly plan as the "Product owner" to clearly guide the "What's" (incremental goals, or "what needs to be done") in the team?

Define weekly Priorities, Allocation, Continuity and Thresholds plan / a PACT to guide you as a team

Note that below are guidelines and you need to define and send your own weekly PACT plan and table

Priorities
Every bug leads one step closer to accomplishing your SMART goal; however some bugs need to be found before others:
   1) What are the main test priorities you need to work on? Check the Production schedule for which products are expected to be delivered to testing; contact developer teams to get direct feedback if there are unplanned changes to the schedule you don't see; also remember JIT

   2) How to prioritize main product testing? Focus on product ROI: enterprise products first, then developer tools and finally community (free) tools

   3) Any ad-hoc test priorities? Test patches and engines before regular product releases as this can usually be completed quickly; engines are usually needed for a specific main product release

   4) Quick-testing: always break to check new installers and website content

Allocation
There is a finite number of test engineers and so many products and features to test. Parallelism is our enemy, as we don't need test summaries for 4 products at once but one test summary at a time as soon as possible:
   1) Unless you have a strong reason why splitting the team to work in parallel on different test deliveries improves efficiency, focus on everyone testing one deliverable at a time

   2) Make testing for a single deliverable feature set circular in order to reduce the number of false negatives (and increase the number of Zigs) - tester A tests feature A while tester B tests feature B; then tester A tests feature B while tester B tests feature A
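The circular assignment in point 2 can be sketched as a simple rotation - an illustrative sketch only, with the function name invented for the example:

```python
def circular_rounds(testers, features):
    """Yield one {tester: feature} assignment per rotation; after
    len(features) rounds every tester has covered every feature."""
    rounds = []
    for shift in range(len(features)):
        rounds.append({
            t: features[(i + shift) % len(features)]
            for i, t in enumerate(testers)
        })
    return rounds
```

Each round simply shifts every tester to the next feature, so every feature eventually gets a fresh pair of eyes and false negatives have less room to hide.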

Continuity
We can have between 1 and 5 product test rounds. The first test round is usually the one with the most low hanging Zigs to pick and to put in your Zig basket. However the fifth test round is as important as the first one even though "What" to test is different
   1) Plan for the longest first test round, especially if you have a completely new product to test; always specify how long the testing will last, as there won't be second chances to [regression] test all features from scratch

   2) Push the testing into the next week as necessary but always specify why

   3) Subsequent test rounds should be short but long enough to cover all fixes and changes made by the developers since the last test summary

   4) Final round is always #3 (#5 for new products) - no matter what you find there, the product will be released so think twice before deciding how much time to spend on this one as you cannot extend the testing further

Thresholds
You have 3 new product builds to plan testing for, but how will you know when to stop testing one and move on to the next one?

Actually I'd like to hear some of your suggestions here and then I'll update this post; there are many ways to define thresholds in testing but also don't forget that you have a SMART goal that must be achieved each month as it is reset to 0

Testing the hell out of one product because it has easy to find Zigs while only glancing through another one is not an option, as we will easily end up with false negatives for the latter: bugs are always there since developers inadvertently ensure this is true - you just need to find them

Tuesday, October 2, 2012

Spin off blog - Developer central

Over the years we've been writing standards documents for developers covering coding, CLI and GUI, explaining what to do and what not to do during development. The documents cover best practices and have some concrete suggestions; however, they are updated at most once per year, fully read once, then forgotten

Technology keeps improving daily and individual teams and developers keep discovering new tips and tricks to stay ahead but in the end all this knowledge stays within a single team or worse with a single developer. We must change this and share our knowledge and experience

As new developers join the company I keep seeing repeating questions and repeating issues with newly written code - mentors spend hours of time to answer the same questions over and over again. Let's stop this trend now: http://apexdevcentral.blogspot.com/

Friday, September 28, 2012

Back to the test drawing board

We've been cutting labels and sending products to testing for years - both developers and QA know the rules implicitly without even looking down at the written guidelines. What scared me the most this week is that when system rules were violated and there was massive confusion on both the developer and QA sides, one of the arguments was "we've been doing it like that since I joined the company"

Nobody likes reading the rules, operation manuals and workflow guidelines, but still they are there for a reason. If you forget about them, you'll be hearing from me soon

Let's get a few things straight:

   1) Devs, there's no more Continuous testing; you cannot just send in a new major build without any internal testing and expect QA to determine whether the product can even start, then find/fix a few bugs each day until the build is stable for real testing. Instead, conduct a few days of internal testing (plan it in your Sprints) to pick the low hanging fruit, resolve critical issues immediately (product doesn't start, main features don't work) and write down all non-critical ones to forward to QA along with the first label, to be logged in the bug tracking system

   2) Devs, make sure you always send a modified product version to testing; we cannot have two builds with identical versions that are functionally different just because the engine takes too long to rebuild. Consider updating the way engines are referenced and the versioning strategy, and suggest solutions proactively; Everyone leads - proactively identify problems and solve them, don't ask questions but make recommendations

   3) Devs, always send an official request for testing when a new build is ready; specify testing areas in as much detail as possible, even when it means you have to roll up your sleeves and write detailed guidelines for back-end high-tech stuff that QA might not understand on their own; Everyone shares - communicate widely and effectively, educate your customers

   4) QA, it IS your job to determine if the product can start or not; Everyone serves - treat everyone as a customer and take personal ownership of their issues vs. an "it's not my job" attitude; if you got a version that doesn't start at all, assume that it did work on the devs' side and that it is your job to isolate on which systems the product works and on which it doesn't, then report Bugs

   5) QA, report all found Bugs professionally following the established procedure without sending additional emails with built up emotions and no concrete analysis or actionable suggestions

   6) QA, use your right to cut testing short if there are too many obvious Critical issues found, however always follow the procedure and send in testing summary at the end and allow devs to fix the Bugs to surprise you in the next test round vs. hate-mailing about the build quality

The upside of this week's confusion with our new product is that almost everyone managed to transition to constructive discussions with actionable suggestions led by analysis and pros/cons; we need more of this in all aspects of our work

Tuesday, September 25, 2012

Think, simplify, consolidate

We must consolidate new Bugs and reduce administrative overhead, and by doing so help all teams when dealing with Bugs. Having multiple Bugs that all relate to the same cause, result in the same violation or expose the same incorrect behavior allows them to be fixed independently, which can cause inconsistencies or needlessly take more time to create, review and update

Some examples encountered recently:
   Issue 1: GUI standards issue: several menu / ribbon items have inconsistent casing and are named differently across different products
   Bug 1: Have only one Bug for each of the products and list all related UI issues and standard violations within it; this way all issues will be fixed at one time vs. risking introducing new inconsistencies

   Issue 2: Product UI has multiple issues when using 120 DPI system font resolution
   Bug 2: Have only one Bug for the product at one time vs. one Bug per dialog or even one Bug per visual issue; there is no reason why some should be fixed and some not - they must all be fixed in order to consider having support for specified system font resolution

   Issue 3: Product stops responding during a specific operation (any general perceived performance issue)
   Bug 3: Have only one Bug explaining how to reach the non-responsive state; don't create individual Bugs for everything that no longer works after the product is already in the non-responsive state as the core issue must be fixed to resolve this

   Issue 4: Multiple usability or obvious design issues
   Bug 4: Have one Bug per group of issues that should all be resolved together, i.e. per UI dialog or group of dialogs where they are discovered

Obviously it's not possible to list all cases encountered so far and all possible cases that will be found in the future. Note the following best practices and apply them:
   1) If Bugs found should be resolved all together to ensure complete functionality, consolidate

   2) If resolving found Bugs individually (even if they are all reported with equal severity) can produce inconsistencies, consolidate

   3) If just by grouping similar issues in a single Bug you can save much time vs. explaining them individually, consolidate

   4) If there are multiple unwanted consequences resulting from a single cause, consolidate

   5) When in doubt, always think and try to simplify: "Think critically, objectively and with full information before you answer, act or deliver"
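The consolidation rules above boil down to grouping symptoms by their shared root cause before filing. As a minimal sketch (the `cause` and `symptom` field names are hypothetical illustrations, not our actual TFS schema), the idea looks like this:

```python
from collections import defaultdict

def consolidate(bugs):
    """Group individual findings by their shared root cause so that one
    consolidated Bug is filed per cause instead of one per symptom.
    'bugs' is a list of dicts with hypothetical 'cause'/'symptom' keys."""
    grouped = defaultdict(list)
    for bug in bugs:
        grouped[bug["cause"]].append(bug["symptom"])
    return dict(grouped)

# Example: three findings collapse into two Bugs to file
findings = [
    {"cause": "120 DPI layout", "symptom": "Options dialog clipped"},
    {"cause": "120 DPI layout", "symptom": "About box text overlaps"},
    {"cause": "menu casing",    "symptom": "'Export As...' vs 'Export as'"},
]
per_bug = consolidate(findings)  # two entries: one Bug per root cause
```

Each resulting entry becomes one Bug whose description lists all its symptoms, which is exactly the "fix them all at one time" outcome Issues 1-4 describe.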

Friday, September 21, 2012

Scrum tales - Part 7

Almost all developer teams have encountered this issue with their Sprints: how do you define Sprint tasks and estimates when it is almost impossible to know upfront all the individual steps needed to reach the defined PBI goal - i.e. you just don't know where to start?
To some extent this also applies to other teams who cannot define their PBIs' individual tasks upfront during the Sprint planning meeting, so please read ahead and don't "guesstimate" too much

1. Understand your goal / PBI - what exactly do you need to accomplish?
   a) Improve performance - where, by how much %, and can this be a perceived performance improvement vs. hacking open Windows drivers?
   b) Fix a stubborn Critical severity issue whose cause you have no idea about - is there a workaround you can implement instead, can you at least fix it so that it is no longer considered Critical severity, can you research a bit to define how to proceed next?
   c) Build a new product or product feature - do you at least know the feature set needed if you don't have full specs or UI mockups?

2. Research - one thing that was rarely used, or was used incorrectly, is research tasks:
Yes - you can add research tasks to closer define actual tasks that will lead to the goal accomplishment
No - you cannot have an always open research task to work on during the whole length of the Sprint

Start by creating a finite research task that will allow you to figure out your exact next tasks towards the PBI end goal. Limit the research, which must result in:
   1) New research tasks that are closer defined and can be shared amongst more team members
   2) Specific finite tasks to start working towards the goal

Specifically related to the hypothetical goals mentioned above:
   a) To improve performance start by defining tasks to:
   - test current version vs. previous versions of the product and isolate speed difference
   - test comps and isolate speed difference
   - research code to isolate bottlenecks
   - research bottlenecks to prioritize them starting with easy fixes (perceived performance improvements) to complex driver hacks

   b) To fix a bug when you have no idea what is causing it, define tasks to:
   - research code to find workarounds or at least new more specific research areas
   - research new specific areas for possible fixes and list them from easiest (quick workarounds) to most complex ones (redesign engines) and estimate each one

   c) To build a new product knowing only a closed feature set, define tasks to:
   - research comps and create new design tasks
   - research feature functionality - how to technically implement the features and what will be needed to do this; define design tasks

3. Define achievable tasks with real estimates

Scrum is different from the Waterfall model in that it doesn't require you to know all the How specifics before starting to sprint. Scrum team must clearly understand the goal (PBI) but can figure out how to achieve it during the sprint itself

It is important however to note that research tasks must also be finite and they must result in new research tasks or concrete development tasks leading you one step closer to the goal

Tuesday, September 11, 2012

Scrum tales - part 6

Finally managed to finish a Sprint and you're happy that all/most/some of the goals (PBIs) have been completed successfully? Check everything from the list below before saying that again:

1) Deliverable specified in the PBI has been actually delivered to the intended customer (internal or external) and the Product owner has seen this. I.e.:
   a) Product new build has been approved for production
   b) Enough new bugs have been found for a product build and testing summary has been sent
   c) Patch build has been sent to Support team
   d) All customers/leads have received intended emails

2) New or updated document required by the PBI has been reviewed by everyone with vested interest (future users of the document) and all have signed off on it; Product owner has also seen the document and has approved it

3) Content created as specified in the PBI has been reviewed by all higher level owners. I.e.:
   a) Release notes have been technically reviewed and approved by CTO and content rules-wise by Operations manager
   b) Article is signed off by copy-editor and by the Tech marketing team as usable for SEO

4) All feedback received back from the reviewers / Product owner for the deliverable (testing feedback, emails sent out, created document, etc.) has been incorporated in the work and resubmitted; Product owner had no further comments

If you answered Yes to all of the above, you can say with confidence that your Scrum team's Sprint goals have been successfully accomplished - congrats to your team for doing the right things right way

If there is at least one No answer and the Sprint is over, then the goal (PBI) is not done and must be specified as such in the Sprint review. How to fix this:

A) Move the unfinished PBI into the new Sprint; work with your team to add corresponding tasks to the goal to be able to answer positively to all of the above before successfully finishing the goal

B) Prevent the same situation from happening again by reviewing completed work with Product owner before the end of the next Sprint

Monday, September 10, 2012

JIT testing, or what to test when you think there is nothing to test

More than once it happened that you have a goal to test a new build of a product but it turns out the build is not available in time; you could:
   a) Test a product that has lower priority in your current Sprint
   b) Write some obscure automated test scripts for a product that isn't even ready yet and you haven't seen it at all but you did have 1 training session where you understood squat about the product
   c) Play Windows Solitaire until the build gets ready

Choosing any of the above is not the right answer and will quickly get you off the train where we're headed as a company. What to do?

Enter JIT testing
Your goal says you need to find bugs worth a minimum score of 250 and verify resolved bugs for a new product build. Ok - you cannot verify the resolved bugs as you still don't have the new product build, so this part of the goal is externally blocked. However, you can find bugs worth a minimum score of 250 by testing the previous product build you have available:

   1) Take the latest product build you have available; this can be a public build approved for production or it can be previous testing round build that was rejected due to a few found Critical bugs; it can also be a build recently created internally by the automated nightly build script - check the FTP yourselves and check the build version - don't wait for explicit notifications about such builds

   2) Test the hell out of the build and create new bugs that are worth 275 score points (10% over your goal); chances are that all the new bugs you find will still be there in the official new build when it is eventually submitted to testing

Once the new product build is ready, verify the existence of all the bugs you found while testing the previous available build; I bet that most of the issues will still be there and that, in the worst-case scenario, you may only need to find a few more bugs to exceed the score goal of 250
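The 275-point target above is just the goal plus a 10% safety buffer, padding for the chance that some previously-found bugs get fixed in the new build. A one-line sketch of that arithmetic (integer math, rounding the buffer up):

```python
def padded_score_target(goal, buffer_pct=10):
    """Return the bug-score target to aim for: the Sprint goal plus a
    safety buffer (10% by default), with the buffer rounded up so the
    padded target never undershoots."""
    return goal + (goal * buffer_pct + 99) // 100

target = padded_score_target(250)  # 275, matching the example above
```

The padding matters because verification against the new build happens later: anything already fixed no longer counts toward the goal.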

Other things to do while "waiting"
Don't forget that you'll still get ad-hoc requests to handle new installer testing, new website content to be checked - these are all higher priority and can be handled in parallel with the active Sprint

Developer teams will also create Patch testing PBIs. If a patch testing PBI contains Critical severity bug fixes, test it now and don't let it linger; if a patch testing PBI contains only High severity bug fixes, also test it now - there are only a few bug fixes per patch to test anyway so it won't take too much of your ad-hoc testing/support time

Things NOT to do while "waiting"
What I don't want to see is QA excessively mailing developer teams asking "When will we get the new build ready for testing" - such questions should be directed to the QA Product owner (CTO) or not asked at all unless you are planning your next Sprint

Friday, September 7, 2012

Scrum tales - part 5

Patch requests have started piling up in the form of Product backlog items (PBIs) in developer team Product backlogs. Some developer teams have a new sprint starting out soon so incorporating these new PBIs into new Sprint is easy. However some developer teams have just started sprinting; what do we do now:
   a) Make customers wait to get their patches until the next Sprint in 2+ weeks so we can follow our Scrum development process rules
   b) Close one eye and make a small exception within the Scrum rules, add the new PBIs to existing sprint

The answer is c) neither of the above - we need to both get patches to the customers in a timely manner and not violate the Scrum rules. Here's how...

Case 1: A new developer team Sprint is about to start in under a week's time
The answer is simple - just get the new patch request PBIs approved and prioritized on time by your Product owner and incorporate them in the new Sprint

Case 2: A new Sprint has just started and it will be another 2 weeks until it is finished; approved PBI requires a patch containing High severity bug fixes only
This means that the patch will take up to 3 weeks to be finished but it will be ok as none of the defects are Critical severity in the first place
Also use common sense - if you have 2 simple High severity defects to patch which would take a few hours of ad-hoc time, do so now and don't let it linger on for weeks

Case 3: A new Sprint has just started; approved patch requesting PBI contains Critical severity bug fixes that can't wait for the next Sprint
Pause the work on your Sprint and get this patch done ASAP; you already have allocated ad-hoc time for such work outside of the Sprint time


Ad-hoc working Q/A

Q: I've spent my 2 hours of allocated ad-hoc dev time for today; should I go back to my Sprint tasks even though the Critical severity bug patch isn't finished?
A: No - focus on completing one goal at a time; this means you should work whole day if needed to get that ad-hoc patch PBI wrapped up before going back to the Sprint tasks

Q: We're starting work on ad-hoc PBI - do we need to define individual tasks in the team and possibly make a separate Sprint for this, then sprint two Sprints in parallel?
A: Don't introduce complexity when there is none. The reality is you are working on 2-3 ad-hoc bug fixes to get the patch ready; these should show up as Bug fixing tasks in your Daily status report, i.e. "PBI #12345 - Create patch for Refactor 2012 R1 with 2 bug fixes; TFS #8943 - High - Bug name"

Q: I'm going to work on a Saturday - should I focus on my ad-hoc tasks, since weekends aren't counted in the Sprint time?
A: Assume that working weekends or holidays are just like any other working day and take your priorities in order. Sprint comes first unless there are higher priority ad-hoc tasks such as a patch requesting PBI for Critical severity bugs

Wednesday, September 5, 2012

Patch obstacles course

Before clarifying a few use cases below, make sure to (re)read the clear set of Patching rules we already have defined

1) Support must make sure to clarify newly created PBIs which represent goals/requests for patches:

   a) Specify product name and version that must be patched; this doesn't mean to specify product version where bugs were found but to specify actual version of the product that is currently available to customers and must be patched
   b) List all defects individually that need to be patched - these can be High or Critical severity bugs only - each defect's number, severity and name must be specified in PBI description; OR you can simply link TFS bugs to this new TFS PBI
   c) Send an FYI email about the created/updated PBI to the developer team AND to the Product owner; if you send to the developer team only, they will have to pull the Product owner's sleeve to get it approved, which introduces more emails into the otherwise direct process

2) Developers must review the new PBI immediately; then:

   a) Know your Scrum rules – you cannot put a new and especially unapproved PBI in ongoing Sprint – do so and you will violate core Scrum rules which is primarily bad for your team as your sprint may fail to achieve original goals
   b) If the PBI contains Critical severity defects to patch, estimate whether you will be able to get it done within a week's time (per the rules) by waiting for the next sprint; if not, you have ad-hoc time allocated for fixing such Critical issues outside of the regular sprint
   c) If PBI contains High severity defects only, work with Product owner to get it approved in time for inclusion in the immediate next Sprint
   d) Support team doesn't care what code branch you will work on, how to implement the same fixes in multiple branches, etc., they just need the patch build delivered per rules

Friday, August 31, 2012

Learn how to patch in 3 easy steps

If you're wondering who owns the product patches, who determines when a patch will be done, what will the patch cover and how it will be delivered, the answer to all these questions is summed up in one word - Support

You don't depend on CTO Development or Operations manager to forward the requests and make developers do the actual work of creating patches. Simply use the Scrum process we have set in place and get your patches delivered just the way you like them - medium rare or well done

Below you will find 3 easy steps to help you help customers

1. Identify the need
As you are directly in contact with customers you can feel their pain and determine what issues trouble them the most; no one else can do this for you, not CTO, not QA, and definitely not developers who created the issues (although inadvertently) in the first place

Ask yourself the following questions:
   a) How many customers have been affected?
   b) How serious is the bug at hand? (see How to determine your own bug severity)
   c) Does the identified bug have a workaround you can offer now to the customer?
   d) How fast does the customer need the fix?
   e) How many "invisible" customers who didn't contact you could have been affected by the bug?
   f) How brand-damaging is the bug?
   g) Will patching the bug now help the company to keep existing customer or gain a new one?

Compile and maintain the list of bugs-for-patching candidates daily; make sure you include only bugs that customers actually need. Remember: not all bugs are created equal - a High severity bug that has a workaround isn't necessarily an instant candidate for patching
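The triage questions above can be condensed into a rough decision sketch. This is purely illustrative - the field names and the "3 affected customers" threshold are hypothetical choices, not a policy; the actual call always rests on your judgment of questions a) through g):

```python
def is_patch_candidate(bug):
    """Rough triage sketch: only High/Critical bugs qualify for patching,
    and a High severity bug that has a workaround and affects few
    customers is not an instant candidate (per the note above).
    'severity', 'affected_customers' and 'has_workaround' are
    hypothetical field names used only for this illustration."""
    if bug["severity"] not in ("High", "Critical"):
        return False        # rules allow patch requests for High/Critical only
    if bug["severity"] == "Critical":
        return True         # Critical always qualifies
    # High: qualifies if widely felt or if no workaround can be offered now
    return bug["affected_customers"] >= 3 or not bug["has_workaround"]
```

So a High severity bug with a known workaround and one affected customer stays on the watch list rather than going straight into a patch request.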

2. Make the request
Now that you have a neat list of one or more bugs that obviously need to be patched ASAP, create a patch request for corresponding product: just add/update a Product backlog item directly in the product-owning developer team's Product backlog

If you are requesting a new patch, create a new Product backlog item and describe in detail what exactly you need delivered - specify all bugs individually: their ID, severity and description

If you have found additional bugs for the same product that need patching, append them to previously created Product backlog item if it is still open

Send an FYI to Product owner and the developer team about added / changed Product backlog item in their Product backlog

3. Deliver
You've done your job - don't worry about if or when you will get the actual patch sent to you; this is now up to the developer team (they also have a system and a set of rules to follow). However, for reporting needs you should still maintain a list of all patches you requested and when you requested them

As soon as you receive the patch with requested bug fixes, directly contact the customers who reported the issues and make them happy


Q/A

Q: I need to get a patch for this Medium severity bug as customers desperately need the fix. However rules say I can only request a patch for Critical and High severity bugs?
A: Why do customers desperately need the fix? If this is so obvious, are you absolutely sure your bug should be Medium severity in the first place?

Q: I requested Critical severity bugs patched and the customer needs this yesterday. Developer team just started a new two-week sprint without the patch goal included so this patch won't be done in another two weeks - can I yell at developers to speed this up?
A: No - don't yell and don't go pulling on the CTO's or Ops manager's skirt. Developers also have a set of rules to follow, and one of those rules is that Critical bug fixes must be delivered within a week's time even if it means working outside of the Sprint during allocated ad-hoc time. You'll get your patch sooner than you think

Q: I always request new patches each Friday once I build up a list of obvious patch candidates but customers don't want to wait 3 weeks to get the patch. What can I do?
A: Start by requesting new patches as soon as you identify bug candidates for patching, not once per week - there's nothing limiting you to do this daily if needed

Thursday, August 30, 2012

Scrum tales - part 4

As a Scrum team you just finished your first Sprint and need to start a new one? Look no further and see below

Wrapping up a Sprint

1- Sprint review
At the end of the Sprint it is important to present the results to your Product owner in a Sprint review meeting. This meeting is informal in nature so to speed this up just send in Sprint summary email specifying all Product backlog items (goals) and their status:
   a) Done - good, make sure that Product owner can see the results
   b) Not done - explain why the goal wasn't achieved

In the case of #b, reasons can vary from having a surge of ad-hoc tasks interfering with the Sprint priorities, goals being blocked by external impediments, to Scrum team members being abducted by aliens; in any case the reason must be clearly explained. Having clear reasons stated will help both Product owner and Scrum team to adapt future Sprints to prevent this from happening or at least to plan ahead: plan more time for ad-hoc priority tasks, work out the impediments in advance, or prepare supplies and go underground in case of an alien invasion

Important: unfinished Product backlog items must be moved into the new Sprint if they are still of high enough priority and be 100% finished there

2- Sprint retrospective
This meeting is organized by the ScrumMaster and whole Scrum team should attend. It must cover the answers to the following questions:
   1. What worked?
   2. What didn't work?
   3. What will we do differently?

Log answers to the above questions in your Scrum tracking system and apply them in the next Sprint. It is necessary to adapt your self-organization as a team to prevent or predict impediments and make the next Sprint achieve all defined Sprint goals


Starting a new sprint

1) Make sure your team Product backlog is up-to-date; if you have uncertainties about Product backlog items or their priorities contact your Product owner early

2) Change ScrumMaster in the team - this is mainly up to your team to decide, but changing ScrumMaster will allow you flexibility and make sure anyone can handle the responsibility. It is not an excuse to stop sprinting if specific team member who knows how to be a ScrumMaster is away

3) Conduct a Sprint planning meeting with the whole team and prepare the new Sprint backlog. Although the Sprint backlog is owned by the Scrum team and the Product owner shouldn't interfere, still send the final variant in an email for the Product owner to see and to at least have a reference when discussing any related issues in the future

Wednesday, August 22, 2012

Self-help book: Define your own defect severity

You've found a new bug but you aren't sure what the correct severity for the defect is? Welcome to the "This is no exact science" club

Here are some general hints on how to solve this question that I already shared with some of you over IM:
1) Treat the Bug severity standards as guidelines - see the first line in the document; it clearly says that the document is written as guidelines and that the ultimate responsibility for the bugs you find is yours

2) Worried about how will Critical severity affect production, developers? Are you also worried about global warming and dying rainforests? If so, how does this help you define the severity of the bug you just found? The answer is: it doesn't

3) Wondering why your Low severity defects describing non-working features still aren't fixed after two months, yet considering making this one Low severity as well? Think again

4) Put yourself in the customer's shoes. What is your reaction to the newly found bug?
   a) I don't like the product and I won't use it
   b) This is very annoying, I won't use the product because of it... much
   c) I can't use this feature as I wanted to but I see the workaround exists - kind of annoying but I'll get over it
   d) I don't frequently click in the top right corner 3 pixels next to the Ok button and cause the mouse pointer to glow red - I'll forget about this by tomorrow

5) Make sure that you've put yourself into 80% of the customers' shoes - if you are thinking as someone who used to code Commodore 64 games in assembly back in the 80's and knows how to get a Linux distro's kernel version, then chances are this issue may only be visible to you

6) Contact your QA team members to see if the apparently Critical issue is easily reproducible by at least one more team member; maybe your VM OS stopped working after months of abuse

7) Devs insist on reducing the bug severity you've set and you're scared, so best to comply? No - stand by your decision, especially when backed by the facts you determined before defining the severity

8) Severity can go both ways, both you and devs have solid facts about the case. Contact analysis team for recommendation. However this should be an extremely rare case if you follow #7 above

9) Still not sure? Notice that I haven't answered any of the questions above with definite answers, when to use which bug severity. This is entirely up to YOU. As you have the ultimate responsibility for quality of our products and ApexSQL brand in general, you also have all the power needed to affect the quality with bugs you find

Note: if devs are giving you hard time, let me know directly ;)

Scrum tales - part 3

Let's get back to the Scrum basics for a second - I'm seeing recurring issues regarding creation of Product backlogs and Sprint backlogs:
1) My team's Product backlog items are concise and can be easily understood by Product owner:
   a) "Review the handbook"
   b) "Implementing new CRM system"
   c) "Post 3 new articles each day"
Our team will excel during the next Sprint after Product owner sees this

No - you're pretty much toast after Product owner sees such undefined and immeasurable Product backlog items - all will be rejected. Don't worry, I got your back - here's how to fix this:
   a) Don't just 'review' but instead 'review and update' or 'review and send proposal'
   b) Don't work indefinitely, i.e. 'implementing' - instead just 'implement'
   c) Don't 'post N articles each day' - instead 'post X articles total' and define Sprint tasks how you will do this (per day, per week, divide among team members) - this is up to you to achieve the goal

2) We're planning a Sprint but some of the tasks are pure guesstimates, so we'll just leave them without any estimates in the Sprint backlog and move on

If you do that, how will you know that your Sprint goals are achievable for the time allocated for the Sprint? I know that Sprints are 'owned' by teams and ScrumMaster is only making sure Scrum rules are followed - however how can Sprint burndown chart be correct if you have no task estimates in the Sprint?
Instead you can do the following:
   a) Plan ahead for complex and large new Product backlog items - create new ones with higher priority that will allow you to first research and learn about the technology, what needs to be done and how long it will take
   b) Estimate with the help of the entire team - if you need 3hrs and your team member needs 6hrs, that means the task shouldn't be less than 4.5hrs
   c) If the estimates are wrong, update them later - that is why they are called 'estimates' and you update Remaining Work for each active task on a daily basis
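Point b) is simple arithmetic - average the whole team's estimates rather than trusting one person's optimism - and the burndown in the paragraph above is just the sum of Remaining Work across active tasks. A minimal sketch of both:

```python
def team_estimate(hours_per_member):
    """Average the estimates from all team members: if you say 3h and a
    teammate says 6h, the task shouldn't be estimated below 4.5h."""
    return sum(hours_per_member) / len(hours_per_member)

def burndown_point(remaining_per_task):
    """One point on the Sprint burndown chart: total Remaining Work over
    all active tasks. Tasks left without estimates contribute nothing,
    which is exactly why the chart becomes meaningless without them."""
    return sum(remaining_per_task)

estimate = team_estimate([3, 6])  # 4.5, per the example above
```

Updating Remaining Work daily (point c) is what moves successive `burndown_point` values down the chart and shows whether the Sprint goals are still achievable.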

Scrum tales - part 2

More Scrum questions are showing up - I'm covering top ones here again for all to see:

1) Can I add new tasks to a Sprint after it has started?
Yes - you can definitely add Tasks that you find along the way for goals (Product backlog items) that you previously committed to. Tasks must be estimated, and if they are of higher priority than some existing tasks previously added to the Sprint it might cause the bottommost ones to be squeezed out and moved into the next Sprint

2) I have several Product backlog items that are of high priority but can only be done one task per day due to technical limitations. How do I work with such goals and what happens when they cannot be fully completed during one Sprint?
Just focus on completing tasks top to bottom and following their priority. If you cannot finish all the tasks in order to close the Product backlog item during one Sprint, move this item into the next Sprint and continue working on it until fully completed

3) Everyone in my Scrum team is sick - must be measles. Can I do a Sprint planning meeting by myself?
Yes, although it is highly recommended to have as many of your team as possible present during the Sprint planning meeting. Always think ahead, prepare the next Sprint early while everyone is still healthy and accounted for. Don't wait for the last Sprint day to prepare for the next one

Scrum tales - part 1

Developers, Support and Analysis teams have now had their first experience working with new TFS Visual Studio Scrum 1.0 template and I'm seeing Product backlogs forming, Sprint planning meetings underway

I'm still getting a good number of questions, both systemic (Scrum) and operational (TFS), i.e.:
1) How do I initiate a Sprint when I have 5 large Product backlog items and all are of the same priority?
You can't because you don't know which one to take first - make sure you have Product backlog items all with different priority approved by Product owner before you initiate Sprint planning meeting. No two goals should have the same priority

2) What do I do when one Product backlog item cannot be worked on in parallel, can I start the next one in the Sprint?
Yes - you can organize internally in the team in order to achieve goals you committed to the best way possible

3) Can I add tasks that include waiting on other people/teams to send me something?
No - why would you commit to a goal that you don't know you can finish? Identify impediments early and resolve them before the next sprint. If there are unforeseen obstacles that show up during a sprint, add an Impediment work item to your Sprint backlog and it is now up to the ScrumMaster to close it ASAP. Until an Impediment is closed proceed working on tasks next in line

There are more use cases and I'm sure we'll encounter even more in the upcoming weeks. I'll put them up as they appear. Everyone should feel free to post back thoughts and questions