Measurable Reporting Through Issue Tracking

What it is

A set of useful reports that are easy to read and relevant to what customers want to know about their code. These reports can be customized to reflect what each customer is most interested in. Important software quality questions that Prism test reports answer include:
  1. What has changed in the code and the architecture between the previous build and the current build?
  2. How many new issues were introduced in the code between the last build and the current build? Are these issues trending up or down across builds?
  3. Does code submitted for acceptance by a third-party developer meet our SLA standards for that developer?
  4. What development activities are occurring in my organization right now? How many developers are interacting with the code? By maintenance? By new development? By rate of coding?
  5. On which applications do I need to focus my attention most?
  6. How productive is my development organization? What is the overall development capability of the team?
  7. Are we under-resourced relative to our application portfolio, and if so, where?
  8. Based on the rate of code change, is the predicted delivery date for the application still feasible?
  9. What potential vulnerabilities are in the code? Which should be fixed immediately? Which are less important?
  10. Are assigned resources working on assigned tasks?
  11. Which applications are ready for a QA cycle?
  12. How much code is changing in response to defects reported by the QA team?
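The trend question above (are new issues rising or falling from build to build?) reduces to a simple calculation over per-build counts. The sketch below is illustrative only, not Prism's actual trend logic, and the threshold of 0.1 is an assumed noise cutoff:

```python
def new_issue_trend(counts: list[int]) -> str:
    """Classify per-build new-issue counts as trending "up", "down",
    or "flat" using a least-squares slope over the build sequence.
    Purely illustrative; the 0.1 slope threshold is an assumption."""
    n = len(counts)
    if n < 2:
        return "flat"
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, counts))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    if slope > 0.1:
        return "up"
    if slope < -0.1:
        return "down"
    return "flat"
```

A report can then surface the label next to the raw counts, giving managers a one-word answer alongside the underlying data.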

Because of the issue data Prism collects and tracks, development management, QA, and security teams gain deeper visibility into what is really going on in their applications.

What it does

The most valuable assets of the Prism platform are the metrics that result from its analysis reports. Management wants answers to questions concerning software quality and safety, but providing those answers has been a problem for most companies: there has never been a consistently comparable set of enterprise-specific data about quality from which management could make informed business decisions. Prism answers this metrics challenge by gathering a wide array of consistent, objective, and comparable data across the customer's entire code base. Prism can track issues from build to build, suppress issues the analyst does not care about, or analyze only what has changed in the code since the previous run. This data forms the core of better, more relevant application-specific metrics for each Prism customer.
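Build-to-build tracking of this kind is typically done by fingerprinting each issue on stable attributes rather than line numbers, then diffing fingerprint sets between builds. The sketch below shows one way that could work; the `Issue` fields, fingerprint scheme, and suppression mechanism are assumptions for illustration, not Prism's documented internals:

```python
from dataclasses import dataclass
import hashlib

@dataclass(frozen=True)
class Issue:
    file: str
    check: str      # hypothetical checker name, e.g. "NULL_DEREF"
    snippet: str    # normalized source text, so line shifts don't break matching

    def fingerprint(self) -> str:
        # Hash content rather than line numbers so the same issue
        # still matches across builds when code above it moves.
        raw = f"{self.file}|{self.check}|{self.snippet}"
        return hashlib.sha256(raw.encode()).hexdigest()

def diff_builds(previous: set[str], current: list[Issue], suppressed: set[str]):
    """Classify current-build issues as new or carried over, skip
    analyst-suppressed fingerprints, and report which previous
    issues no longer appear (i.e. were fixed)."""
    new, existing = [], []
    current_fps = set()
    for issue in current:
        fp = issue.fingerprint()
        current_fps.add(fp)
        if fp in suppressed:
            continue                    # analyst opted out of this issue
        (existing if fp in previous else new).append(issue)
    fixed = previous - current_fps      # present last build, gone now
    return new, existing, fixed
```

Analyzing only what changed since the previous run then amounts to feeding the differ just the issues found in modified files.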


Moreover, this information needs to be conveyed in a format designed for decision-making. Prism's reports include summaries, the most problematic files, the most frequently seen issues, tracking data and graphs, and comprehensive detail on every issue Prism discovers. The reports can be linked to both an issue tracking system and the developer's work environment, so that clarity and consistency regarding code quality assurance can be pushed throughout the enterprise. Developers and software engineers get the assistance and information they need through Prism's issue tracking integration, while summary reports give managers the critical insight necessary to assess improvement in their software development processes.
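Linking a report to an issue tracking system usually means mapping each reported issue onto a ticket payload in the tracker's schema. The sketch below uses a generic, made-up ticket format; the field names and the `issue` dictionary keys are illustrative assumptions, not a real tracker API:

```python
import json

def to_tracker_payload(issue: dict, project_key: str) -> str:
    """Map a report issue to a JSON ticket payload for a hypothetical
    issue tracker. Field names here are illustrative only."""
    ticket = {
        "project": project_key,
        "summary": f"[{issue['severity']}] {issue['check']} in {issue['file']}",
        "description": issue.get("detail", ""),
        "labels": ["prism", issue["check"].lower()],
    }
    return json.dumps(ticket)
```

Posting the resulting payload to the tracker's REST endpoint (and storing the returned ticket ID alongside the issue fingerprint) is what lets later builds update the same ticket instead of filing duplicates.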


Figure 3 – Prism Metric and Trend Data

Why it matters

Without clear, consistent, and comparable metrics, management cannot judge the effectiveness of its code quality assurance initiatives. Reports that serve developers who need to fix code are rarely right for managers who want to evaluate the readiness of code for QA or to see issue trends across hundreds of applications. Because Prism is flexible, it can be configured to the unique needs of each department and each customer.