
1 kilogram of Quality

Mesut Durukal

from Indeed (Japan)

About speaker

Mesut has 15+ years of experience in Industrial Automation, IoT platforms, SaaS/PaaS and Cloud Services, the Defense Industry, Autonomous Mobile Robots, and Embedded and Software applications.

About the speaker's company

https://www.indeed.com/

Abstracts

broad

We try to ensure the quality of our deliverables and our confidence in them. But how can we measure quality to make sure our goals are achieved? Let's figure out what the key quality metrics are and how we can measure them.

Motivation:
Motivation:
Nowadays, competition in the software world is fierce. Everyone tries to reduce time to market and cost while shipping a broad set of features. To make our product more competitive, we keep adding compatibility and scalability requirements. But sometimes less is more: should we deploy in 1 week with several issues, or in 2 weeks with no major issues?

Here we are talking about the elements that shape a deliverable: Time, Coverage, Cost, and Quality. Unlike the others, quality is challenging to measure.

Closely correlated with quality itself, the performance of QA teams is also under the spotlight:

* What are they really doing? What is the outcome of QA activities?
* Are they successful? What do the QA activities change?

Problems:
How can we measure quality and the outcome of QA teams and their activities?

Solutions:
Firstly, we will talk about being ready for measurement: unless we have proper tools and state transition definitions in place, we won't be able to analyze any data. Then the big question is what to measure. We can list hundreds of metrics, but which of them give the best insights? Once we know what to collect, automation is the way to minimize manual effort. After getting the raw data, we are ready to build fancy monitoring tools: graphs and dashboards are great ways to provide visual evidence.
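
To make the automation step concrete, here is a minimal sketch, assuming a Jira-style issue tracker with a paginated REST search endpoint; the base URL, token, project key, and field names are placeholders rather than any specific installation's values. It pulls bug records page by page and flattens them into a CSV that a dashboard tool can consume.

```python
# Minimal sketch: automated metric collection from a hypothetical issue tracker.
# The endpoint, token, project key, and field names are placeholders.
import csv
import requests

BASE_URL = "https://tracker.example.com/rest/api/2/search"  # hypothetical endpoint
TOKEN = "redacted"  # read from a secret store in a real setup

def fetch_bugs(project: str) -> list[dict]:
    """Fetch all bug issues for one project, page by page."""
    bugs, start = [], 0
    while True:
        resp = requests.get(
            BASE_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"jql": f'project="{project}" AND type=Bug',
                    "startAt": start, "maxResults": 100},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json().get("issues", [])
        bugs.extend(page)
        if len(page) < 100:
            return bugs
        start += 100

def export_csv(bugs: list[dict], path: str) -> None:
    """Flatten the fields a dashboard needs into a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["key", "component", "status", "created"])
        for bug in bugs:
            fields = bug.get("fields", {})
            components = fields.get("components") or [{}]
            writer.writerow([
                bug.get("key"),
                components[0].get("name", "unknown"),
                fields.get("status", {}).get("name"),
                fields.get("created"),
            ])

if __name__ == "__main__":
    export_csv(fetch_bugs("DEMO"), "bugs.csv")  # feed this CSV to the dashboard
```

Scheduling a script like this as a nightly CI job is usually enough to keep the raw data fresh without any manual effort.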

Finally, and definitely not least, comes the interpretation of the results. Numbers are great, but what can we understand from them? Are there any action items we need to take?

Wrapping up, the lifecycle of a monitoring activity looks like this:

* Customize environments to enhance transparency/visibility
* Choose metrics: proposed decision criteria
* Automate measurement
* Create visuals
* Analyze the results

Results & Conclusion:
I present several good practices for quality monitoring. But eventually, the conclusion we will reach is: there is no way to measure quality! We can't say we have 1 kilogram, 5 units, or 10 meters of quality.
So, what are we talking about, then? Of course, measuring key metrics is still a good way to gain insights. We can learn a lot from the reported bugs, such as (a small sketch after this list shows how such figures can be derived):

* Which components are the most prone to errors?
* What kind of testing activities should we improve?
* What are the most common root causes of the issues?
* What is the ratio of reopened bugs?
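
As an illustration of the kind of analysis these questions imply, the sketch below assumes each bug has been reduced to a small record with a component, a root-cause label, and a reopened flag (illustrative field names, not any particular tracker's schema); it counts bugs per component and per root cause and computes the reopened ratio.

```python
# Minimal sketch: deriving insight metrics from reported bugs.
# The record fields (component, root_cause, reopened) are illustrative.
from collections import Counter

bugs = [
    {"component": "auth",    "root_cause": "missing validation", "reopened": False},
    {"component": "auth",    "root_cause": "race condition",     "reopened": True},
    {"component": "billing", "root_cause": "missing validation", "reopened": False},
]

by_component = Counter(b["component"] for b in bugs)           # most error-prone components
by_root_cause = Counter(b["root_cause"] for b in bugs)         # most common root causes
reopened_ratio = sum(b["reopened"] for b in bugs) / len(bugs)  # share of bugs that came back

print(by_component.most_common(3))
print(by_root_cause.most_common(3))
print(f"reopened ratio: {reopened_ratio:.0%}")
```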

An important part of the talk is presenting not only the metrics that work but also the ones that do not! What does the number of bugs or the number of test cases really mean? I won't ask you to forget about them, but I will try to convince you not to obsess over these numbers alone.

Finally, I will introduce a set of metrics that is not commonly used: emotional metrics. Since we are talking about quality, it is not only about the product itself, but also about the way we develop it and the people involved. We can't build quality with unhappy teams or people. Thus, let's talk about team satisfaction as well.

Takeaways
-----------------------
We will try to find answers to these questions:

* Which metrics can we track?
* How can we deal with dozens of metrics?
* How can we automate metrics collection?
* After getting data, what is next?

The talk was declined

Other talks on this topic

An Intro to Kubernetes Hardening

Ayesha Kaleem

MBition GmbH

broad
Securing K8s: back and forth to RBAC Enforce

Roman Levkin

Exness

specific
K8s load testing at scale with k6-operator

Ant(on) Weiss

PerfectScale

specific