Wednesday, December 11, 2013

Actionable inter-team deliverable approvals

Every Scrum team has either asked other teams for help with deliverables review or has provided assistance by reviewing deliverables of other teams
This inter-team collaboration is encouraged as it will in turn improve inter-team communication, the speed of deliverable reviews and, most importantly, the quality of deliverables

However, before Product Owners can accept a deliverable as approved, we'll need to see that the team performing the review has actually invested time to provide constructive and actionable feedback; otherwise we'll automatically assume that the deliverable doesn't meet the expected acceptance criteria

What does this mean in practice?

Scrum team being reviewed - help reviewers to help you:
   a) Explain to the review team the intent behind the deliverable
   b) Specify areas that must be reviewed in a structured way
   c) Note what kind of feedback you need in order to have the deliverable approved

Team performing the review - provide full information in your reply:
   a) What has been reviewed
   b) What should be modified/updated/clarified, or
   c) Why nothing should be modified/updated/clarified (less likely)
   d) Recommend whether the reviewed deliverable should be approved or not by PO
Why should a team spend time reviewing other teams' deliverables if they are not direct stakeholders? Our Company values statement says that Everyone serves: "Treat everyone as a customer, internal or external, and make responsiveness part of your personal brand. Build effective working relationships with a focus on being positive, informative, actionable, and helpful."
 
Leave-me-alone replies such as "Approved", "We have nothing to add", "This is good", and similar won't count as reviews at all during Sprint review

Friday, September 27, 2013

Efficient customer support chain

To keep the chain of help efficient, from the customer all the way down to developers, teams should make sure not to skip steps or seek shortcuts. Shortcuts not only prevent internal growth but also unwittingly consume more team time in the long run, making support inefficient and time-consuming
 
 
What does this mean in practice?
 
   1) Support - don't be a hero; you cannot isolate all bugs by yourselves, as this will suck up your time and you won't be able to work on anything else that day. Pass complex support cases down to QA to isolate and reproduce the issue once you have defined the problem itself; not passing cases down to QA will also deny them the opportunity to grow hard skills and help you better/faster in the future
 
   2) Support - make sure you define what the problem is about when the customer first contacts you; this means you shouldn't forward a received customer email directly to QA if you don't know the answer immediately - try to at least define the problem before pushing it down the chain to QA to isolate and report a bug
 
   3) Support - some of you have a tendency to contact devs and discuss directly with them whether a customer report is a bug and whether or not devs can fix it easily; don't go to devs but go to QA instead, because going to devs without a defined bug will only waste time for both teams
 
   4) QA - although you no longer have the unattended assistance metric, this doesn't mean you should contact devs for every single support case, and definitely not repeatedly when similar issues show up - I hear directly from devs which QA team is doing a better job isolating technical support case bugs
 
   5) QA - you can and should contact the customer directly when you need additional help to isolate a complex problem; contacting the customer through Support will only waste both teams' time and prolong solving the case

   6) Devs - help QA grow by giving them pointers on what to research, check and isolate; if you always roll up your sleeves, dig down into code and isolate a problem for them, you'll be stuck doing this indefinitely, with a consequential impact on our product releases

Friday, August 23, 2013

Clueless QA Q&A

Unless you're finding tons of bugs each day and feeling primeval hate toward high entropy and other teams' imperfect work, you'll want to check out some of the use-case based Q&A below

Q1: I found a bug but my team members say it is not actually a bug and that I shouldn't report it
A1: What would a customer say after encountering the same issue in production?
Just ignore your team members, report the bug and let the product owner worry about resolving the reported bug one way or another

Q2: A product issue I found obviously seems like High priority, but I'm afraid that team members and/or management will think I'm just inflating my monthly found-bugs score
A2: What would a customer say after encountering the same unfixed issue in production that you didn't elevate?
Follow the priority guidelines and use common sense, but always err on the side of customers. Don't worry - if I see priority inflation you'll hear from me

Q3: I used our Product B to help me while we're testing Product A and I found a bug in Product B; I don't think I should report Product B bugs at this time
A3: What would a customer say after encountering the same unfixed issue in Product B?
I can't remember the last time I saw a product bug reported outside of regular test rounds - remember that everything is always in testing: all products, the entire website, the keyboard you're typing on, the chair you're sitting on, this blog post you're reading now

Q4: Support asked me to help isolate a customer-reported bug, but I don't think I should talk to customers as my English skills aren't so good
A4: What would the customer say if the bug isn't isolated and isn't fixed in the next product version?
Think about it - the customer is giving you a free bug, and those are usually High priority; all you need to do is ask for details

Q5: I don't like how our new product is designed because it is confusing and hard to use, however it must be how developers, support, management and God thought it would be best so I won't report this
A5: What would the customer say to the same confusing or hard to use design?
Raise such issues early as bugs; if you think something is wrong then it most certainly is

Q6: One of the product menu items doesn't seem to work, but this is less used part of the product so it doesn't deserve a Test Case because Test Cases should only result in High bugs
A6: What would the customer say after encountering an unfixed issue in production because it was simply missed and not reported, as there was no corresponding Test Case?
Note that customers don't give a damn about our internal Test Case rules
 
Q7: I'm concerned that the bug I found cannot be fixed by developers and it should be fixed by another team so I don't think I should report it
A7: What would a customer say after encountering the same issue in production?
If you're so concerned about developers, ask them if they'd like a coffee and a cake, or just a foot massage instead of a bug report; OR you can just forget entirely about who should fix a bug and focus on reporting the bug itself
 
If you didn't notice the pattern above for resolving QA concerns, please contact me directly to organize a special training for you

Monday, July 8, 2013

Everyone strives 101

The latest addition to our company values statement says that Everyone strives, meaning everyone should be productive, focused, driven, and motivated. What does this mean during your everyday work?
 
   1) Focus on results - always know your end objective, whether you are working on a sprint goal or you have an unplanned ad-hoc task to finish. If at any time you feel like you don't know where to start or what the next step should be to get to the finish line, stop for a second and remember your end objective (new product build, analysis recommendation sent, documentation written and published, etc.)
 
   2) Expend the effort / go down with a fight - for various reasons we all get into time crunches to deliver. If you're already under pressure, the worst thing you can do is give up by "knowing you cannot finish on time" or because your "core hours have passed"; I've heard this far too many times - everyone is suddenly an oracle (no pun intended) and prefers to give up rather than go down fighting.
Working a few more hours has killed no one, and most of the time it is just enough to achieve your goal; even if the goal isn't achieved on time, you'll be able to finish it more quickly in the next sprint / next day
 
   3) Use time efficiently / demonstrate consistent productivity - contrary to the popular belief that you must always work overtime in order to deliver, simply remove time wasters from your everyday work; distinguish activity from productivity - just ask yourself: "What would happen if what I'm doing right now never gets done?" - this will help you uncover time-wasting tasks that lead nowhere
 
   4) Prioritize effectively - this is easy once you get used to it; just focus on your priorities - the top PBI in your sprint, a High priority bug, a High importance email, etc. - and you can never go wrong. Working on a lower priority task just because it is easier or quicker to do is always wrong - this has happened many times, and although you may have a deliverable in the end, it will never be the right one
 
   5) Identify risks early and work to overcome roadblocks - "It was blocked by XYZ" is the most common reason for not getting something done on time; again, in most cases it could've been prevented if this same goal had been better prioritized and raised early. To fix this, just use your brainpower to think of risks and prioritize those tasks before you start working - using brainpower afterwards to think of an excuse is a waste of time
 
   6) Single task - focus on exactly one goal and complete one task at a time before moving on to the next one. Yes, it is that easy
 
   7) Focus on daily deliverables - if you're about to shut down your workstation, ask yourself: "What have I delivered today?" - if you're having a hard time thinking of an answer, reconsider the shutdown
 
   8) Set expectations realistically and expend the effort required to meet them - in almost all cases you have the freedom to set your own ETAs - these are sprint Task estimates defined by the entire Scrum team on sprint start, or ad-hoc task ETAs you provide when answering an email. Still, it happens that you "overcommit", making this the second most common reason for not getting something done; even if you do overcommit, it is your responsibility to kick into a higher gear and achieve the objective, but also to learn from it and improve your estimates in the future

Tuesday, April 23, 2013

Scrum tales - Part 14 - Spike goals

Almost all teams currently sprinting have encountered predictable work they can plan ahead, but with a twist: the work cannot be estimated until it starts, which defies the Scrum principle that all sprints must be time-boxed and that all work should be estimated during sprint planning

In relation to the previous post validating the addition of research tasks to sprints, let's start using a specific Scrum element going forward whenever research is required in order to estimate the work needed to provide a deliverable - a Spike goal

A Spike goal is a PBI with a specific deliverable which is not necessarily what the Product Owner needs, but what the team needs in order to estimate another, linked goal that will eventually lead to a concrete deliverable requested by the Product Owner

Each Spike goal must be estimated ahead and time-boxed during sprint planning. The team will commit only to the Spike goal on sprint start. After the Spike is done, the related concrete goal will be estimated and added to the sprint / committed to

Typical workflow involving a Spike goal:
   1) Product Owner requests a deliverable (PBI goal) that is new or unknown to the team and cannot be estimated based on past experience or currently available input data
   2) Team creates a corresponding Spike goal, links it to the base goal and makes sure that the Spike is just above the base goal priority-wise
   3) During Sprint planning, the team time-boxes the Spike to e.g. 2 days with the Product Owner's approval, and creates corresponding tasks and estimates; the team is now committed only to the Spike goal but not yet to the base goal
   4) As soon as the Spike goal is done, the team will create tasks for the base goal, since there will now be enough input data to estimate the tasks leading to the concrete goal deliverable
   5) The sprint is groomed to make room for and/or insert the now fully estimated base goal, and the team commits to deliver the base goal by the end of the sprint
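The workflow above can be sketched as a tiny state model - a minimal Python illustration, not any real tool's API; the Goal class and its fields are made up for this example:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    title: str
    priority: int          # lower number = higher priority
    timebox_hours: int = 0
    estimated: bool = False
    committed: bool = False

def plan_spike(base: Goal, timebox_hours: int) -> Goal:
    """Steps 2-3: create a Spike just above the base goal priority-wise,
    time-box it during sprint planning and commit to it on sprint start;
    the base goal stays uncommitted until the Spike is done."""
    return Goal(title="Spike: " + base.title,
                priority=base.priority - 1,   # just above the base goal
                timebox_hours=timebox_hours,  # e.g. 2 days = 16 hours
                estimated=True,
                committed=True)

def finish_spike(base: Goal) -> None:
    """Steps 4-5: once the Spike is done there is enough input data
    to estimate the base goal and commit to it for this sprint."""
    base.estimated = True
    base.committed = True
```

The only invariant that matters is the commitment sequencing: the Spike is estimated and committed up front, while the base goal flips both flags only after the Spike completes.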

Depending on the Spike goal outcome, it may be clear that the ongoing sprint won't be enough to complete everything needed to deliver the base goal - in such cases it is best to discuss with the Product Owner splitting the base goal and committing to deliver at least one part of it in the current sprint

Thursday, March 28, 2013

Concise bugs for improved revenue

There's a whole chain of inefficiency spawned by each poorly qualified bug:
   1) QA reports a bug with a title not matching the actual issue closely enough
   2) The bug slips through spot-checks with an incorrect priority assigned due to the inconclusive title
   3) Developers spend time fixing a lower priority bug or miss fixing a higher priority bug
   4) The Support team cannot write/finalize the release note correctly and spends much time on it, including getting help from QA
   5) The Marketing team spends time copy-editing poor release note grammar or semantics
   6) Finally, customers get a lower quality release, or a delayed release with incomprehensible release notes

Solution - QA

Peer-reviews: if devs can peer-review code, QA can also peer-review new bugs
   a) Make pairs, i.e. team member 1-2 and team member 3-4
   b) Define review schedule - each day, just before sending test summary (this is up to you)
   c) Suggest corrections to your team pair about all bugs that don't look correct
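The pairing in step a) is just a matter of walking the team list two at a time - a minimal sketch, with the odd-team-size case folded into the last pair; member names in the test are made up:

```python
def make_review_pairs(members):
    """Step a): pair team members 1-2, 3-4, ... for the daily bug
    peer-review; with an odd team size the leftover member joins
    the previous pair as a trio."""
    pairs = [members[i:i + 2] for i in range(0, len(members), 2)]
    if len(pairs) > 1 and len(pairs[-1]) == 1:
        odd_one_out = pairs.pop()           # lone member at the end
        pairs[-1] = pairs[-1] + odd_one_out # fold into the last pair
    return pairs
```
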

Once we get the final summary, both the bug owner and the bug reviewer will be held responsible for a poor bug title, incorrect priority, standards issues, etc.

Apply the same to new test cases

Solution - Support

As prime owners of release notes you shouldn't just keep your head down and grumble while rewriting poor release notes
   a) For each release note where you needed to open the corresponding bug in order to rewrite it, write down the bug owner's name
   b) Compile a simple rank list (leaderboard) - 1, 2, 3, 4 - from the most to the least frequent bug owner you needed to correct
   c) Include the leaderboard along with each set of corrected and approved release notes
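Steps a) to c) boil down to counting corrections per bug owner and sorting - a minimal Python sketch; the owner names in the test are hypothetical:

```python
from collections import Counter

def release_note_leaderboard(corrected_bug_owners):
    """Rank bug owners from most to least frequently corrected.
    Input: one owner name per release note that had to be rewritten."""
    counts = Counter(corrected_bug_owners)
    # most_common() sorts by correction count, descending
    return [owner for owner, _ in counts.most_common()]
```
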

Tuesday, February 12, 2013

Success is a science

The perfect candidate for an open position is an average one. Doesn't make any sense? Read on

A new employee is like a new product:
   A) We must do candidate testing (analysis) before deciding to go ahead, but no matter what we decide, results can be surprising. This is why the analysis must be focused/concise, as objective (scientific) as possible and, most importantly, fully comparable/relative between all the candidates
If subjectivity is still present in the end, it must be qualified with a quantifiable, relative property for all candidates

   B) The outcome is unknown until we measure and see Growth Over Time - it can be high ROI or it can be a black hole with negative ROI. In the end we cannot know for certain what will happen no matter how extensive the analysis was, but we can make sure to have an objective way to track GOT and, again, to compare it relatively between all hired candidates ("new products"). This is our main data point and is critical in any decision-making process

   C) The best candidates can fail and the average ones can succeed; given a good system and a large enough sample of average, easy-to-find candidates, success is inevitable. Our job is to take in all the objective data points and be decisive - exclusively select one of two options:
      a) Pivot - change the direction we're headed, part ways (drop/change further product development)
      b) Persevere - incentivize, keep on in the same direction with minimal changes that can again be measured when the time comes

   D) Do all of the above in short, quick cycles; the shorter the cycle, the sooner we succeed or fail, and either way we'll learn something new and be able to apply it in the next cycle

Monday, February 11, 2013

Pick up the rifle v2.0

Not sure who owns a specific task or system functionality, or who has the ball now? The simplest and usually correct answer is: You do!

One thing all Scrum teams encounter is a task blocked by an external team - e.g. waiting for question feedback, waiting for review, waiting for approval, waiting for... Does this mean that the external team owns the task now? No - You still do
How to unblock such tasks? There's always a way, here are a few:
   a) Talk about or at least mention all blocked tasks within your team every day on Daily Scrum meeting
   b) Help ScrumMaster to summarize and report all blocked tasks to Product Owner after each Daily Scrum
   c) Make sure you have an ETA for reply from the external team and remind them early and often
   d) If not waiting for approval, then attempt a workaround - try to resolve the task yourself vs. waiting indefinitely

A sprint cannot fail just because a task was externally blocked: You own the task, and your team, headed by the ScrumMaster, has the ultimate responsibility to unblock it on time


What about Scrum-unrelated system ownership - who owns the support forum, who owns product source code, who owns posted defects? One would think these are all straightforward and implicit, but let me still clarify by taking the last one as an example - ownership of all posted defects

Simply think of a posted defect the same as a QA owned Sprint task:
   A) All defects are posted by QA ("tasks" are created by the Scrum team itself); other teams may in some cases create a few defects (Support for customers, developers for engines only) but defect priority ("task" estimate) is provided or modified by the QA team
   B) The defect creator is the current owner of the "task" and must follow through to the end - provide additional details, retest, close the defect
   C) If a defect owner is not available to update the defect (the "task" is blocked), the QA team has the ultimate responsibility to unblock it by taking ownership vs. waiting indefinitely on external teams to act
   D) If there are defects created by Support and unverified, posted by former colleagues, or made obsolete by new product versions, whose "sprint" will fail if they remain blocked indefinitely?

Wednesday, January 30, 2013

Scrum tales - Part 13 - Remaining work estimate

Sprints and Tasks are owned by Scrum team members, and as such you are directly responsible for maintaining their integrity. The Product Owner is only concerned with specific Product Backlog Items (goals), to get final deliverables as expected, and won't interfere with the Sprint Tasks

As each Task is owned by a single Scrum team member, if the task is not done at the end of the day, that team member must update the estimate of how many hours of work remain. The estimate should be as realistic as possible and should answer a simple question: "How many more hours do I need in order to complete this task?"

Q: Can the work hours estimate be unchanged or even go up after working on a task?
A: Yes, but in such cases, if your daily burndown Actual line also goes up, you need to provide an explanation

Q: Where do I track how many hours I spent on a specific Task?
A: You don't. We only need to know how much work remains and to have as objective an estimate as possible

Q: What if my Task takes several days to complete?
A: Refactor the Task into several smaller ones so you can complete at least one Task each day; don't end a workday without finishing at least one Task

Q: What if our Tasks are too complex and cannot be split into smaller Tasks?
A: Unless your Task is an elementary particle, it can be split

A good sprint is a sprint that has concise Tasks so that each Scrum team member can complete at least one Task each day and provide a Daily deliverable
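The rules above reduce to simple arithmetic - a minimal Python sketch (the snapshot structure is an assumed representation, not any real tool's data model): the Actual line is just the per-day sum of remaining estimates, and any day where it rises needs an explanation.

```python
def actual_line(daily_snapshots):
    """Each snapshot maps task -> remaining hours at the end of a day;
    the burndown Actual line is the per-day total of what remains."""
    return [sum(day.values()) for day in daily_snapshots]

def days_needing_explanation(actuals):
    """Per the Q&A above: any day where the Actual line goes up
    calls for an explanation in the daily summary."""
    return [day for day in range(1, len(actuals))
            if actuals[day] > actuals[day - 1]]
```
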

Thursday, January 24, 2013

Scrum tales - Part 12 - Definition of Done

A common issue repeated in almost all Scrum teams is figuring out when a specific Product Backlog Item can be considered Done. What we haven't established, but what will be critical going forward, is a commonly known and widely understood Definition of Done (DoD)


Let's focus first on what each goal / PBI should contain: Acceptance Criteria - a simple checklist of the unambiguous deliverables required to consider the goal Done
Although each PBI is named so it can fit neatly into a straightforward question "Did you <PBI title here>?", the answer is not always a simple Yes or No (providing a seam to exploit ;)) so it must be backed up by a more detailed acceptance criteria checklist

Who defines acceptance criteria?
Product Owners do. Scrum teams can help suggest goals based on Product Owner provided Epic stories, but the Product Owner is directly responsible for defining clear expectations for each PBI while grooming the Product Backlog

How can we, the Scrum team, help?
Acceptance criteria are as important to a Scrum team as they are to the Product Owner. With clearly defined expectations it won't be possible for the Product Owner to say: "Hey, you haven't delivered the product in a cardboard box, which was obviously expected"
Just make sure that you are clear on all deliverable expectations while grooming the Product Backlog - ask your Product Owner for clarification on each unclear goal at this time. You cannot go back after you commit to PBIs in a new Sprint; the Product Owner will expect to see deliverables in order to give you an OK in the end

What is the DoD then?
   1) Present deliverable to Product Owner as soon as it is ready
   2) Answer unambiguously Yes to all items in the acceptance criteria checklist
   3) Get an OK from Product Owner

If you followed through with the three steps above, feel free to mark a PBI Done

Tuesday, January 15, 2013

Production roadmap demystified

To escape the arbitrary process of determining the production roadmap, we need to instate a scientific and clear system to satisfy business needs and have unambiguous priorities for both products and their functionality. The production roadmap will be defined in a few simple steps:
   1) Create empty slots for product releases per developer team based on business (EPO) needs
   2) Perform Research and Analysis to determine a clear priority list for new/existing products and their functionality
   3) Fill the roadmap slots by following the previously defined product/functionality priorities

Step 1 - empty slots creation
Owned by: COO with assistance of CEO/Sales

Business needs that guide creation of production roadmap empty slots:
   1. One possible release slot per core developer team per month (12 per team)
   2. 75% of empty slots are to be allocated for new High ROI products (9 "green slots" per team)
   3. Release one new major version for existing products each year
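The slot arithmetic above follows directly from the business needs - a minimal Python sketch (the function and field names are hypothetical, used only to illustrate the 12/9/3 split):

```python
def roadmap_slots(core_teams, months=12, green_ratio=0.75):
    """Business needs 1 and 2: one possible release slot per core
    developer team per month, with 75% of each team's slots reserved
    for new High ROI products ("green slots")."""
    per_team = months                      # 12 slots per team per year
    green = round(per_team * green_ratio)  # 9 green slots per team
    purple = per_team - green              # 3 slots left for existing products
    return {"slots_per_team": per_team,
            "green_per_team": green,
            "purple_per_team": purple,
            "total_slots": core_teams * per_team}
```
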

Step 2 - R&A rank lists
Owned by: DPD with assistance of other technical teams (Support, QA indirectly)

A singular system similar to what we use in HR - evaluate specific properties of each candidate, summarize, sort and create a clear rank list of recommendations to fill the "buckets"

We can distinguish 3 sets of properties* for 3 rank lists we need to fill out the empty Production roadmap slots:
   1. Possible new High ROI products (green slots)
   2. Existing products (purple slots)
   3. New/existing product changes/functionalities; a prioritized list of features needed for each of the products, which we'll use to fill the allocated slots

*Specific properties for R&A product/functionality ranking will be detailed separately

Step 3 - filling the empty slots
Owned by: COO with assistance of DPD

With clearly defined business needs and R&A rank lists, filling the slots is an objective process:
   1. Fill the green slots based on new High ROI products R&A list of priorities
   2. Fill the purple slots based on existing products R&A list of priorities
   3. Assign a minimum and maximum feature set to each product release; the minimum feature set must be met, the maximum only if there is enough time and resources allocated, but not at the expense of delaying the release
   4. Proof the estimates with developer teams to establish a commitment on both sides: no changes on the requirements side and no delays on the developer side


Do's and Don'ts
   A) Do apply lean techniques for new releases: make each new product release as short as possible with minimum possible feature set needed to determine the success of the product; analyze first version results and make a clear decision to pivot or persevere

   B) Do plan and proof estimates conservatively to make Production roadmap predictable and solid for the entire year

   C) Do use priority pairs to determine the weights of business needs if something's got to give; e.g. new High ROI products > one major release for all existing products each year

   D) Don't assume available development resources are fixed; think outside the box to satisfy all business needs

   E) Don't push back planned releases; instead, cut lower priority features to release on time

Wednesday, January 9, 2013

Product release notes - why do they matter

Everyone involved in pre- or post-production writes product release notes: developers, QA and Support. In the end, release notes for all products should appear consistent and be published following a carefully constructed set of internal writing standards. Here's an autopilot plan listing the release notes creation process in top-down priority, by owner


Support

You are the direct owners of product release notes; they serve as proactive help and guidance for our customers by letting everyone know what we improved, changed and fixed in the new product releases. Having good release notes has multiple benefits:
   a) Satisfied existing customers
   b) More new customers
   c) Less need for later reactive support

This means that you have the final and ultimate responsibility to polish the release notes for all products before the release, make them consistent and easy for everyone to understand, and push them to production. To do so, use all the help you can get from other teams (QA, developers) to clarify required technical details and rewrite the release notes as needed

Release notes standards are there to make all release notes consistent, not to create more bureaucratic overhead; if you see a way to improve them by making release notes simpler and easier for customers to understand - act on it now


QA

You write most of the release notes from scratch as soon as Bugs are reported, which basically makes you the main authors of 90% of all our release notes. Writing good release notes has multiple benefits:
   a) Release note text can also be directly used as Bug title - two flies with one swat
   b) A Bug title matching a well written release note will make developers understand bugs more easily, and they will pull on your sleeve less often
   c) You save time for Support, which in turn allows them to resolve more reactive support issues and forward fewer issues to you, thus saving your own time


Developers

You are the first and the last line of defense - you consolidate all release notes for a new product release into one file at the start, and incorporate them back into the product after the Support team reviews them

Yes, you still have to write all enhancements and changes, as no one knows what has improved and changed in the product better than you. Additionally, you have the responsibility to incorporate the final release notes back into your product and to make sure formatting is consistent across all products

As Everyone Serves, your main job is to provide prompt technical assistance to other teams when they need to write a good and simple release note


Common suggestions

   1) When in doubt about how to write a release note, just put yourself in the customer's shoes and see if an outsider would understand it after a single read
   2) Make all release notes as concise as possible - they are not help content but proactive one-sentence guidelines for the new product version
   3) Describe anything and everything that has changed and especially improved in new product releases, even if you accidentally improved performance by a few percent
   4) If not sure how to write a release note, contact other teams for assistance - don't write a poor and inconclusive release note, as it will eventually come back to you

Tuesday, January 8, 2013

Scrum tales - Part 11 - get back on track

Let's resolve two distinct sprint issues that have repeated several times in various Scrum teams:
   1) A team member who was counted on has left the team, got sick, or had to work on new critical priority tasks not anticipated in the sprint = sprint fails
   2) All team members are accounted for and working, but the sprint burndown shows the team is late halfway through the sprint = sprint fails


1) What do we do when team resources change during a sprint?
Sprint grooming. This is an unofficial Scrum team meeting organized by the ScrumMaster with a single purpose: accommodate the sprint tasks to the change in Scrum team resources

Specifically, if the team has lost a team member, organize a quick team meeting and see how many hours are taken away from the sprint from the current date until the end of the sprint, then just remove the corresponding number of tasks from the sprint bottom-up (lowest priority ones first)

Similarly, if the team has gained a new team member, organize a team meeting and see how many additional hours you have gained until the end of the sprint, then just append tasks totaling that many hours to the sprint

   a) Don't change team Projected hours - leave the original burndown graph line intact for comparison as all changes will be immediately reflected in the Actual line
   b) Do mention to the Product Owner in the daily Scrum summary that the sprint has been groomed and how many hours have been taken away from or added to the sprint
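The bottom-up removal described above can be sketched in a few lines - a minimal Python illustration; the (priority, hours) tuples are an assumed representation, not any real tool's data model:

```python
def groom_for_lost_hours(tasks, lost_hours):
    """tasks: list of (priority, remaining_hours) tuples, where a lower
    number means higher priority. Remove tasks bottom-up (lowest
    priority first) until at least lost_hours are taken out of the
    sprint; the Projected line is left untouched."""
    kept = sorted(tasks, key=lambda t: t[0])   # highest priority first
    removed, removed_hours = [], 0
    while kept and removed_hours < lost_hours:
        task = kept.pop()                      # drop the lowest-priority task
        removed.append(task)
        removed_hours += task[1]
    return kept, removed
```

Gaining a member is the mirror image: append tasks from the backlog until the gained hours are covered.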


2) What do we do when we are obviously running late (above the Projected line) midway through the sprint?
Roll up your sleeves and burn some midnight oil to catch up ;) Remember that the whole team took part in the Sprint planning meeting to estimate the work ahead, and that you had plenty of chances to include research and analysis tasks to get a better sense of the work needed. There's a plethora of reasons why you're behind, but there's only one solution - switch into a higher gear and catch up

   a) Don't perform sprint grooming in this case unless team resources have changed
   b) Do include a detailed description to the Product Owner in the daily Scrum summary of what you are planning to do to catch up and what you have done already

   c) Don't ignore #b ;) all daily Scrum summaries must include a Catching up notes section if you are running late in the second half of the sprint