WHIRGAM

Agile
2022-07-17

Yet another mnemonic acronym for Product Owners, their teams, and their communities.

As in many endeavours in life, the key to success is often asking good questions. This encourages thinking before doing. In a Product Owner role there is no shortage of people demanding action; there is always a flood of enhancement requests to be done. Running through a quick questionnaire before diving into making a change can save a lot of time and ensure a better product is built.

There are numerous mnemonic acronyms out there to give a quick buzzword to this process. INVEST, PURE, and SMART are a few popular ones. There are recurring themes in many of these. The important thing is to extract the ideas from them and take what you feel are the important concepts. Then have you, your team, and your community consider (question) those points on each work item for the product.

So here is my mnemonic acronym: WHIRGAM. If you say it fast enough it kind of sounds like "We're Game". That's a good point to get to with the team before they start working on an issue.

  • Why - Why are we doing this?
  • Headache - What headache/problem are we trying to solve?
  • Impact - What is the impact of this on everyone?
  • Repeat - Are we repeating ourselves?
  • Grow - Can this solution grow? Does it have "good bones"?
  • Assignable - Can this be assigned to someone?
  • Maintainable - Can this be maintained?

Why - Why are we doing this?

This is probably the most important question. It is a little open-ended and ambiguous, but it is still a good question to ask.

If the answer to this is "it is our contractual obligation to do this, and if we don't, we will get sued" or "the customer said jump, so the only thing I want to hear is 'how high?'", then there is a problem. The work item will probably still get done. But the ability to have buy-in from the team, to be collaborative, Agile, or happy, is severely diminished.

Better sub-questions on this are:

  • Why not? (if you are in an experimental phase)
  • Is everyone excited to do this?
  • Is everyone excited to receive this?
  • How does this help tell the story of our product vision?

Headache - What headache/problem are we trying to solve?

Before devising a solution, it is usually best to define the problem first. Try to avoid having a solution that is looking for a problem; start with the problem.

The same goes for end users. Sometimes enhancement requests come in steeped in very detailed changes the user wants to the user interface or to the functionality of the backend system. It is very possible, if not the norm, for end users to request enhancements lacking any description of the problem they are trying to solve. This is only natural. Most people want to give solutions, not problems. But it does impede collaboration.

Define the problem, not the solution.

Do a walkthrough analysis of how end users are coping with this problem today. This helps build empathy, understanding, and camaraderie between the product team and the end users.

Impact - What is the impact of this on everyone?

Now that you know what the pain points of the problem are, compare how the quality of life of users will improve after the solution is deployed. Then you can see the value of the work being done.

Most work items start with a single person requesting a change. But the product has many users. How will this solution impact other users? Does this fit with the grand vision of the product?

Will all or most users be as excited to receive the changes to the product as the user who first requested them? Will some users object? Do you need a settings switch to enable or disable the feature? Should it be enabled or disabled by default? The broader the appeal of the new feature, the greater its value.
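
For example, here is a minimal sketch of such a settings switch (Python, with hypothetical names; your product's flag mechanism will differ). The niche behavior is off by default, so only the customers who asked for it opt in:

    # Hypothetical feature flag: the new, niche layout is off by default,
    # so existing users see no change unless they opt in.
    FEATURE_COMPACT_VIEW_DEFAULT = False

    def render_items(items, compact=FEATURE_COMPACT_VIEW_DEFAULT):
        if compact:
            return ", ".join(items)                    # new, opt-in layout
        return "\n".join(f"- {i}" for i in items)      # existing default layout

    print(render_items(["widget", "gadget"]))                 # unchanged for everyone
    print(render_items(["widget", "gadget"], compact=True))   # opt-in behavior

Had the feature appealed to nearly everyone, the default would flip to True instead.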

Think about how this feature is going to be marketed. Is it positively stated? It is hard to market something positively when you know it is a hacky thing done just for one customer, something you aren't very proud of and want to hide from your other customers.

It shouldn't undercut or be critical of your previous work either. No software is perfect. Sometimes you know you are deploying something that isn't perfect; trying to deploy perfect software is an antipattern called Gold Plating. Sometimes you don't know a feature is going to be a problem until after it is deployed. Hindsight is 20/20. Enhancement, bug fix, and pivot descriptions should not be critical of previous decisions. It only hurts the original customers, developers, or other team members who advocated for the feature, and it creates a toxic element in the community you are trying to build. Being critical of past work and denying the opportunity to fix it both encourage gold plating. Make sure you can put a positive spin on changes that fosters pride in the product: past, present, and future.

Repeat - Are we repeating ourselves?

The second rule of Simple Design is to not repeat yourself. It is a point that seems to come up in almost every blog post I write (how ironic). Well-designed software shouldn't have two (or more) features that essentially do the same thing. That makes software expensive to build and even more expensive to maintain.

With new features, evaluate whether a similar solution has already been built. Or has the feature been discussed before but not implemented? Perhaps in the past a customer asked for a feature, but because only one customer was asking, a different out-of-product workaround was suggested instead. As more people ask, those use cases can be made first-class features of the product.

Sometimes this doesn't get spotted in planning or design. Sometimes it is the developers who realize they are fixing a similar bug or copying & pasting. When extracting duplicate code out into a single method or subsystem, you must give that new single source of truth a name. Sometimes only then do you discover a new feature in the product. When developers notice this, they need to bring it to the attention of the product owner.
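
As a sketch of what that extraction looks like (Python, with hypothetical names), the duplicated rule gets pulled into one named function, and the name itself can reveal a latent product feature:

    # Before: the same rule is copy & pasted in two places.
    def invoice_total(prices):
        total = sum(prices)
        return total * 0.9 if total > 100 else total   # duplicated rule

    def quote_total(prices):
        total = sum(prices)
        return total * 0.9 if total > 100 else total   # duplicated rule

    # After: one single source of truth. Naming it surfaces the concept
    # (a "bulk discount"), which may deserve to be a first-class feature.
    def apply_bulk_discount(total):
        return total * 0.9 if total > 100 else total

    def invoice_total(prices):
        return apply_bulk_discount(sum(prices))

    def quote_total(prices):
        return apply_bulk_discount(sum(prices))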

Also, do your market research. Maybe your product doesn't support this yet, but do any of your competitors (either well or poorly)?

Grow - Can this solution grow? Does it have "good bones"?

No software is, nor should be, perfect on its first release. But can the feature evolve?

Although one shouldn’t try to predict the future, do take a couple of minutes to anticipate how the feature may evolve over time. Can you build on top of this to make the feature even better over time? Or are you painting yourself into a corner? Does the feature strain the technical stack, or lead to inconsistencies with the product vision?

Eventually you may need to pivot away from this solution. Is that going to be possible? Once users have a feature and are accustomed to it, will they be able to pivot away from it? Or will this feature that has hit a wall have to be continuously supported and stretched beyond its capabilities?

Be careful with feature replacement projects. If there are already 14 features in the product to solve a problem and you plan to fix that by creating a new feature to replace them all, be sure that you aren’t just adding a 15th feature. This is one way knowledge silos are created in an organization: each person builds their own solution and doesn’t respect the others. Instead, try to collaborate on the features that have already tried to solve the problem. Work to improve the existing feature. If it needs to be replaced, be sure that migrating users from the existing feature to the new one is part of the scope of work and the definition of done.

Assignable - Can this be assigned to someone?

Is there a single person, or team lead, that this work item can be assigned to?

Do they have the time to do it? What task(s) are they already assigned that would have to be deprioritized to take on this new one?

Do they understand the domain problem that the business is trying to solve? Do they understand the technical challenges related to this feature? Will there be training time or hiring that is needed?

If there is no one to work on a feature, it is a waste of time discussing it.

Maintainable - Can this be maintained?

Is the feature testable? Testing is a large part of the maintenance cost of a feature. The easier and more automated it is to test, the better.
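
For a pure function like the hypothetical discount rule above, the automated test is nearly free. A pytest-style sketch (assuming pytest is available):

    # Tests for the hypothetical bulk discount rule; run with pytest.
    def apply_bulk_discount(total):
        return total * 0.9 if total > 100 else total

    def test_no_discount_at_or_below_threshold():
        assert apply_bulk_discount(100) == 100

    def test_discount_above_threshold():
        assert apply_bulk_discount(200) == 180.0

The harder a feature is to isolate like this, the more expensive every future change becomes.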

Is the new feature easy to install? Is it easy to configure? Ideally the best configuration is no configuration at all. Installing the new version should "just work". There shouldn't be a huge install, configuration, or retraining effort.
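
One way to get there, sketched in Python with hypothetical names: give every setting a sensible default, so no configuration at all still yields a working install:

    from dataclasses import dataclass

    @dataclass
    class AppConfig:
        # Every field has a sensible default, so AppConfig() "just works";
        # a config file only needs to list what it overrides.
        host: str = "127.0.0.1"
        port: int = 8080
        log_level: str = "INFO"

    def load_config(overrides=None):
        return AppConfig(**(overrides or {}))

    print(load_config())                # no configuration at all
    print(load_config({"port": 9000}))  # override a single setting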

The maintainability of the feature speaks to how well it is designed. Are there security or performance issues? Does it respect the current module responsibilities & boundaries in the system? Will the new feature be compatible with other existing features?