What is Quality Intelligence?
The challenge of assessing the effectiveness of testing
Before diving into Quality Intelligence, let’s talk about the testing effectiveness challenge. As teams place greater emphasis on ensuring software quality throughout the entire software development lifecycle using continuous and automated methods, it’s natural that various types of tests are conducted to verify both functional and non-functional requirements.
As an increasing number of these test types are adopted to validate different aspects of the application under test, the number of automated tools and tests tends to grow exponentially.
This creates a fragmented landscape of information, making it hard to consolidate and make sense of the disparate data.
Various testing tools produce distinct data points through their reporting mechanisms, leading to inconsistencies in formats and levels of detail.
Consequently, teams frequently find themselves unable to extract valuable insights from this data and to evaluate the effectiveness of testing efforts.
This challenge arises from the lack of a unified source of truth for assessing testing effectiveness.
The lack of standardization among different testing tools amplifies the issue
The problem is worsened by the lack of standardization among testing tools, which complicates the comparison of results and hinders the effective integration of data from multiple sources.
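To make the consolidation problem concrete, here is a minimal sketch (with hypothetical field names and report shapes, not any specific tool’s actual output) of normalizing summaries from two differently structured testing tools into one common schema:

```python
# Hypothetical report payloads from two tools with different shapes.
junit_style = {"tests": 120, "failures": 4, "errors": 1, "skipped": 5}
cypress_style = {"totalPassed": 80, "totalFailed": 2, "totalPending": 3, "totalTests": 85}

def normalize_junit(report: dict) -> dict:
    """Map a JUnit-like summary onto a common schema."""
    failed = report["failures"] + report["errors"]
    return {
        "total": report["tests"],
        "passed": report["tests"] - failed - report["skipped"],
        "failed": failed,
        "skipped": report["skipped"],
    }

def normalize_cypress(report: dict) -> dict:
    """Map a Cypress-like summary onto the same schema."""
    return {
        "total": report["totalTests"],
        "passed": report["totalPassed"],
        "failed": report["totalFailed"],
        "skipped": report["totalPending"],
    }

# Once normalized, results from any tool can be aggregated consistently.
unified = [normalize_junit(junit_style), normalize_cypress(cypress_style)]
overall_pass_rate = sum(r["passed"] for r in unified) / sum(r["total"] for r in unified)
```

A real pipeline would of course handle many more formats and far richer detail; the point is that without this kind of shared schema, cross-tool aggregation is not possible at all.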
To be fair, teams can overcome these obstacles: they can build custom reporting engines or adopt commercial Business Intelligence (BI) tools to consolidate the data.
The true challenge is that reporting merely presents factual data without adding judgment or insight.
The crux of the issue lies in finding the signal through the noise generated by the multitude of testing sources.
Teams need to be empowered to separate the relevant from the irrelevant, ultimately unlocking “Quality Intelligence”.
Finding signal in a noisy world
As discussed earlier, traditional reporting approaches typically present data in a static or fixed format, offering predefined summaries or metrics without generating in-depth analysis.
While this static view can be useful for quickly grasping essential information, it often lacks the depth needed to uncover nuanced relationships, hindering teams’ ability to extract and extrapolate useful knowledge.
These reports serve as snapshots of information at a particular point in time, offering only a superficial understanding of the data without exploring underlying trends, patterns, or insights.
To address this, more sophisticated methods are needed to turn raw data into information teams can use to assess testing effectiveness.
In Data Science, Data Analytics is commonly utilized to delve deeper into data and reports. It aims to uncover insights and understand the underlying reasons behind the metrics and reports, often including making recommendations for action based on those insights.
In the Data Analytics field, there are four types of data analytics. They serve distinct purposes in extracting value from data, ranging from understanding past events to predicting future outcomes and prescribing actions to drive desired results, as outlined below.
Type of data analytics:
- Descriptive Analytics: Descriptive analytics reveals “what happened” and involves analyzing historical data to understand past performance and trends.
- Diagnostic Analytics: Diagnostic analytics addresses “why things happened”. It aims to determine why certain events occurred by analyzing historical data and pinpointing the root causes of problems or anomalies.
- Predictive Analytics: Predictive analytics, as its name implies, helps understand “what will happen” by using statistical algorithms and machine learning techniques to forecast future trends and outcomes based on historical data.
- Prescriptive Analytics: Prescriptive analytics helps teams make decisions and determine “what actions to take next”. It is the most advanced type of analytics. It utilizes AI and ML to process extensive data, simulate scenarios, and predict likely outcomes. This approach not only anticipates future events but also provides actionable guidance on how to respond to those predictions to achieve the best results.
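The first three types can be illustrated on testing data with a deliberately naive sketch (the pass-rate numbers are invented, and real systems would use proper statistical or ML models rather than these toy calculations):

```python
from statistics import mean

# Hypothetical weekly test pass rates (most recent last).
pass_rates = [0.91, 0.89, 0.92, 0.88, 0.85, 0.83]

# Descriptive: "what happened" -- summarize past performance.
avg_rate = mean(pass_rates)

# Diagnostic (naive): "why" -- locate the largest week-over-week drop.
drops = [(week + 1, earlier - later)
         for week, (earlier, later) in enumerate(zip(pass_rates, pass_rates[1:]))]
worst_week, worst_drop = max(drops, key=lambda d: d[1])

# Predictive (naive): "what will happen" -- extrapolate the recent trend.
recent_slope = (pass_rates[-1] - pass_rates[-3]) / 2
forecast = pass_rates[-1] + recent_slope
```

Prescriptive analytics would go one step further and recommend an action, e.g. focusing stabilization effort on the suites behind the worst drop.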
Quality Intelligence to the rescue!
As observed in the previous section, the significance of Artificial Intelligence (AI) and Machine Learning (ML) in converting raw data into valuable insights cannot be overstated.
One of the core strengths of AI and ML lies in their ability to identify patterns and relationships within data that may be difficult or impossible for humans to discern. Moreover, they excel in leveraging historical data to forecast future outcomes.
These concepts are the fundamental building blocks of Quality Intelligence.
Unlike the traditional reporting produced by testing tools, which typically focuses on metrics related to test execution results, defects found, and basic requirement coverage, Quality Intelligence adopts a dynamic analytical approach empowered by all the existing advances in AI and ML to turn data into actionable knowledge.
It achieves this by reconciling the large amounts of raw data produced throughout the entire software development lifecycle, from disparate sources, into a centralized source of truth, promoting collaboration among all team members.
This method enables teams to fully harness fragmented data, aiming for consistent evaluation of testing efficiency.
Quality Intelligence unlocks a better understanding of the testing process. Teams can go beyond explaining what has happened (descriptive) and why it happened (diagnostic) to predicting what might happen in the future (predictive) and, finally, recommending actions to mitigate future risks by leveraging prescriptive analytics powered by AI and ML techniques.
AI-Powered Quality Intelligence: Gravity
Gravity is at the forefront of Quality Intelligence, offering a unified platform specifically designed to help testing teams monitor and leverage raw data produced from both testing and production environments, enhancing testing efficiency.
Its primary function is to produce “Quality Intelligence” by processing the ingested data through ML algorithms. This involves translating raw data into meaningful insights using techniques such as pattern recognition, trend and correlation analysis, anomaly and outlier detection, and more.
Gravity sets itself apart from other tools in this space. It not only ingests and analyzes testing data from tools used in pre-production testing environments but also delves into production usage through the ingestion of Real User Monitoring (RUM) data.
Traditional RUM tools such as Datadog, AppDynamics, New Relic, and others are crafted for different purposes rather than generating Quality Intelligence.
They serve a range of teams including Developers, Performance Engineers, Site Reliability Engineers, and DevOps Engineers. But their capabilities may not align perfectly with testing needs, let alone generating Quality Intelligence.
Gravity’s ability to monitor production data points helps uncover an additional layer of insights into how the application is utilized by real-world users.
Such understanding facilitates the recognition of usage patterns, common user journeys, and frequently accessed features, effectively addressing gaps in test coverage that may arise from incomplete or poorly prioritized automated tests.
Here are some examples, though not exhaustive, illustrating how Gravity employs AI and ML to produce Quality Intelligence. The objective is to extract insights and recommendations from raw data automatically, going beyond conventional reporting capabilities:
Production Usage Analysis
AI and ML can analyze usage data from production environments to identify:
- common usage patterns,
- user workflows,
- and areas of the application that require more attention.
This analysis can inform test case selection and prioritization based on real-world usage patterns.
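A minimal sketch of this idea, using invented session data (real RUM streams are far richer, and real analysis would use sequence mining or clustering rather than plain counting):

```python
from collections import Counter

# Hypothetical RUM-style data: one navigation path per user session.
sessions = [
    ["login", "dashboard", "reports", "export"],
    ["login", "dashboard", "settings"],
    ["login", "dashboard", "reports", "export"],
    ["login", "search", "product", "checkout"],
    ["login", "dashboard", "reports"],
]

# Rank whole journeys and individual pages by how often real users hit them.
journey_counts = Counter(tuple(s) for s in sessions)
page_counts = Counter(page for s in sessions for page in s)

most_common_journey, journey_freq = journey_counts.most_common(1)[0]
top_pages = [page for page, _ in page_counts.most_common(3)]
```

The ranked journeys and pages then become a data-driven priority list for test selection.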
Impact Analysis
By leveraging historical data and machine learning techniques, AI can analyze the impact of changes on different features, areas, or end-to-end journeys within the application. This analysis helps in prioritizing testing efforts and identifying areas that require additional attention following a change.
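One simple way to ground this: build a map from changed modules to historically affected features, then query it for a new change. The history below is invented, and a production system would learn these associations statistically rather than from a handful of records:

```python
from collections import defaultdict

# Hypothetical history: past changes and the features affected afterwards.
change_history = [
    ({"payment.py"}, {"checkout"}),
    ({"payment.py", "cart.py"}, {"checkout", "basket"}),
    ({"auth.py"}, {"login"}),
    ({"cart.py"}, {"basket"}),
]

# Build a module -> affected-features map from the history.
impact_map = defaultdict(set)
for changed_files, affected in change_history:
    for f in changed_files:
        impact_map[f] |= affected

def predict_impact(changed_files: set) -> set:
    """Features likely affected by a new change, based on past co-occurrence."""
    return set().union(*(impact_map.get(f, set()) for f in changed_files))

at_risk = predict_impact({"payment.py"})
```

The resulting `at_risk` set tells the team which journeys deserve extra testing after the change lands.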
Test Coverage and Gap Analysis
AI can assess test coverage by analyzing the relationship between test cases and usage paths. ML algorithms can identify features or user journeys that are under-tested or not covered by existing test suites, guiding the creation of additional tests to bridge the gaps.
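At its core, gap analysis is a set difference between observed and tested journeys, ranked by real-world frequency. A sketch with invented data:

```python
# Hypothetical data: journeys seen in production vs. journeys exercised by tests.
observed_journeys = {
    ("login", "dashboard", "reports"): 140,  # sessions seen in production
    ("login", "search", "checkout"): 95,
    ("login", "settings", "profile"): 12,
}
tested_journeys = {
    ("login", "dashboard", "reports"),
    ("login", "settings", "profile"),
}

# Gap analysis: untested journeys, ranked by how often real users take them.
gaps = sorted(
    (j for j in observed_journeys if j not in tested_journeys),
    key=lambda j: observed_journeys[j],
    reverse=True,
)
```

Here the checkout journey, taken by 95 sessions, surfaces as the highest-priority coverage gap.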
Test Case Generation
AI can assist in automatically generating test cases by analyzing the steps taken by users from the usage data, thus reducing the manual effort required for creating and maintaining test suites.
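The mechanical core of this is translating recorded user actions into test-framework commands. A sketch using invented session data and Playwright-style command strings (purely illustrative, not Gravity’s actual output format):

```python
# Hypothetical recorded user actions from a single production session.
session = [
    {"action": "visit", "target": "/login"},
    {"action": "type", "target": "#email", "value": "user@example.com"},
    {"action": "click", "target": "#submit"},
]

def generate_test_script(steps: list) -> list:
    """Turn recorded actions into test-framework-style command strings."""
    commands = []
    for step in steps:
        if step["action"] == "visit":
            commands.append('page.goto("{}")'.format(step["target"]))
        elif step["action"] == "type":
            commands.append('page.fill("{}", "{}")'.format(step["target"], step["value"]))
        elif step["action"] == "click":
            commands.append('page.click("{}")'.format(step["target"]))
    return commands

script = generate_test_script(session)
```

An ML-based generator would additionally deduplicate equivalent sessions and pick representative journeys before emitting scripts.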
Release Readiness Assessment
AI and ML evaluate the quality of the application by analyzing various metrics from different data sources. Predictive models can forecast the likelihood of defects in specific features or end-to-end journeys within the application, providing insight into the application’s readiness for release.
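Such an assessment can be pictured as a risk score per feature compared against a go/no-go threshold. The signals, weights, and threshold below are entirely invented stand-ins for what a trained predictive model would learn:

```python
# Hypothetical per-feature signals gathered from test and production data.
features = {
    "checkout": {"fail_rate": 0.08, "churn": 12, "past_defects": 5},
    "search":   {"fail_rate": 0.01, "churn": 2,  "past_defects": 1},
    "profile":  {"fail_rate": 0.03, "churn": 7,  "past_defects": 0},
}

# Illustrative weights standing in for a trained model's learned parameters.
WEIGHTS = {"fail_rate": 5.0, "churn": 0.02, "past_defects": 0.05}
RISK_THRESHOLD = 0.5

def risk_score(signals: dict) -> float:
    """Weighted sum of defect-risk signals for one feature."""
    return sum(WEIGHTS[k] * v for k, v in signals.items())

blockers = [name for name, s in features.items() if risk_score(s) > RISK_THRESHOLD]
release_ready = not blockers
```

Any feature crossing the threshold becomes a release blocker that warrants extra testing before shipping.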
Conclusion: Quality Intelligence
Quality Intelligence is still in its infancy, with new tools emerging frequently in the market. By harnessing the latest advancements in AI and ML, it aims to optimize software testing efforts by replacing traditional reporting approaches, ensuring alignment with DevOps principles, and actively contributing to the delivery of high-quality software.