
Define process for categorizing bugs into H, M, L severity #588

Closed · 3 tasks · Tracked by #629
jmcook1186 opened this issue Apr 9, 2024 · 3 comments

jmcook1186 (Contributor) commented Apr 9, 2024

Sub of #651

What
Create a process, with a rubric, for categorizing bugs into low, medium, and high severity categories to support bug triage.

Why

As a core developer, I want a clear decision-making rubric for categorizing bugs. This will allow us to make better, faster decisions about how bugs should be handled, and provide more transparency to users.

Context
We want to apply a consistent bug assessment process in our bug triage calls. This requires us to discuss and agree on a decision-making rubric and then make it public via our IF GitHub repository. Ideally, we would also have a set of appropriate remediations for bugs in each category.

The following table shows a draft risk matrix for the various deficiencies that could be reported for IF. The likelihood and severity of each deficiency are scored out of five, and their product is the overall risk score, with a maximum of 25. This could form the basis of a bug categorization process; a minimal sketch of the scoring arithmetic follows the table.

| Deficiency | Consequence | Likelihood | Severity | Risk score | Remediation |
| --- | --- | --- | --- | --- | --- |
| Bugs in IF core | unusable framework, incorrect calculations | 4 | 5 | 20 | Unit testing, integration testing, PR review process |
| Bugs in if-plugins | core pathways fail, IF very limited in functionality | 4 | 5 | 20 | Unit testing, integration testing, PR review process |
| Bugs in if-unofficial-plugins | third-party plugins harder to use, limits IF to standard lib | 4 | 3 | 12 | Unit testing, integration testing, PR review process, collaborations with external orgs |
| Bugs in template | harder to build plugins, ecosystem growth is impacted | 4 | 2 | 8 | Testing, regularly reviewing template for consistency with latest IF |
| Bugs in docs | product does not match expectations, hard to debug, frustration, loss of adoption | 4 | 2 | 8 | Regular audits, ensure change PRs include relevant docs updates |
| Security flaw: privacy related | leak user data, unlikely to achieve adoption in serious orgs | 2 | 5 | 10 | Include disclaimers about third-party plugins, audit and test our plugins regularly, strict QA around community PRs to IF repos |
| Security flaw: permissions escalation | expose user to malware | 1 | 5 | 5 | Include disclaimers about third-party plugins, audit and test our plugins regularly, strict QA around community PRs to IF repos |
| Code not addressing user needs | no product-market fit, loss of adoption | 3 | 5 | 15 | Regular communication with users, open communication channels, strategic partnerships |
| Communication failures within team | conflicting or duplicated work, frustration, morale damage | 3 | 4 | 12 | Standups, processes around issues and PRs, internal Slack channel, check-ins, 1-1s |
| Communication failures with community | we lose product-market fit, we do not have good community retention, reputational damage | 3 | 3 | 9 | Implement processes around external comms, working-in-public processes, documentation, onboarding materials, community calls |
| Communication failures with leadership | product does not meet business goals | 2 | 3 | 6 | Regular sync calls, IF weekly calls |
| License compliance failures, including in supply chain (e.g. exposing privileged API responses for free via a plugin) | legal risk | 1 | 4 | 4 | Build with third-party org agreement/review where possible, read licenses for third-party products being integrated, get explicit sign-off where possible |
| Logistical failures such as missed deadlines | users stuck on old versions, less trust, reputational damage | 3 | 2 | 6 | Clear internal processes that include regular cadences for all key functions such as planning/retro, releases, etc. |
| Bugs affecting releases | users stuck on old versions | 3 | 2 | 6 | Maintain regular release schedule |
| Strategy failures | no product-market fit | 2 | 2 | 4 | Hold regular meetings with ED to stay aligned |
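
To make the arithmetic concrete, here is a minimal TypeScript sketch. The `RiskEntry` type and `riskScore` helper are illustrative names only, not part of the IF codebase:

```ts
// Sketch of the risk-scoring arithmetic in the matrix above.
// RiskEntry and riskScore are hypothetical names, not an IF API.
interface RiskEntry {
  deficiency: string;
  likelihood: number; // scored 1-5
  severity: number; // scored 1-5
}

// Overall risk is the product of likelihood and severity (max 25).
const riskScore = (entry: RiskEntry): number =>
  entry.likelihood * entry.severity;

const ifCoreBugs: RiskEntry = {
  deficiency: 'Bugs in IF core',
  likelihood: 4,
  severity: 5,
};

console.log(riskScore(ifCoreBugs)); // 20
```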

Prerequisites/resources
Bug reporting template should be updated (issue here).

Statement of Work

  • Propose and write up a decision matrix in if/contributing.md
  • Team has had an opportunity to review

Acceptance criteria

  • Bug categorization process is documented in the contributions page on if.greensoftware.foundation and in if/contributing.md (we should avoid two copies of how to contribute: one source of truth for everything, and people can be pointed to the contributing.md file from the docs site)
jmcook1186 added this to IF on Apr 9, 2024
jmcook1186 self-assigned this on Apr 11, 2024
jmcook1186 added this to the Sprint 11 / QA1 milestone on Apr 11, 2024
jmcook1186 moved this to Backlog in IF on Apr 11, 2024

jawache (Contributor) commented Apr 16, 2024

@jmcook1186

The SoW here as I see it is just:

  • Propose and write up a decision matrix in if/contributing.md
  • Team has had an opportunity to review

Acceptance criteria

  • Bug categorization process is documented in the contributions page on if.greensoftware.foundation and in if/contributing.md (we should avoid two copies of how to contribute: one source of truth for everything, and people can be pointed to the contributing.md file from the docs site)

jmcook1186 (Author) commented

The table above is useful for defining our strategy: it captures how likely a bug is to occur as well as its severity, and the product of the two determines where it is sensible for us to expend resources to protect the product.

However, for bugs that have already occurred, the likelihood is nullified. Therefore, for the purposes of bug triage, we can use the severity column only. We evaluate a bug against the criteria in the first column and assign it an L, M, or H tag depending on the severity score.

Severity 1 = L
Severity 2/3 = M
Severity 4/5 = H

At the latest, the labelling will be done at the next available bug triage call, which is now scheduled for 16:00 UTC+1 every Tuesday.

I am adding this to the contribution guideline documentation; any further discussion can happen there.

jmcook1186 (Author) commented

Proposed this assessment rubric in #663:

| Deficiency | Consequence | Severity |
| --- | --- | --- |
| Bugs in IF core leading to incorrect calculations | unusable framework | 5 |
| Bugs in if-plugins leading to incorrect calculations | core pathways fail, IF very limited in functionality | 5 |
| Bugs in if-unofficial-plugins leading to incorrect calculations | third-party plugins harder to use, limits IF to standard lib | 3 |
| Bugs in template | harder to build plugins, ecosystem growth is impacted | 2 |
| Bugs in docs | product does not match expectations, hard to debug, frustration, loss of adoption | 2 |
| Security flaw: privacy related | leak user data, unlikely to achieve adoption in serious orgs | 5 |
| Security flaw: permissions escalation | expose user to malware | 5 |
| Code not addressing user needs | no product-market fit, loss of adoption | 5 |
| Communication failures within team | conflicting or duplicated work, frustration, morale damage | 4 |
| Communication failures with community | we lose product-market fit, we do not have good community retention, reputational damage | 3 |
| Communication failures with leadership | product does not meet business goals | 3 |
| License compliance failures, including in supply chain (e.g. exposing privileged API responses for free via a plugin) | legal risk | 4 |
| Bugs affecting releases | users stuck on old versions | 2 |
| Strategy failures | no product-market fit | 2 |

The mapping of severity to label is as follows:

| Severity | Label |
| --- | --- |
| 1 | L |
| 2 | M |
| 3 | M |
| 4 | H |
| 5 | H |
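
For illustration, a minimal TypeScript sketch of this mapping; `severityToLabel` is a hypothetical helper name, not an IF API:

```ts
// Sketch of the proposed severity-to-label mapping.
// severityToLabel is an illustrative name, not part of the IF codebase.
type BugLabel = 'L' | 'M' | 'H';

function severityToLabel(severity: number): BugLabel {
  if (severity <= 1) return 'L';
  if (severity <= 3) return 'M'; // severities 2 and 3
  return 'H'; // severities 4 and 5
}

console.log(severityToLabel(3)); // 'M'
console.log(severityToLabel(5)); // 'H'
```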

zanete closed this as completed Apr 25, 2024
github-project-automation bot moved this from Ready to Done in IF on Apr 25, 2024