Research & White Papers

Below are short summaries of, and links to, research and white papers on software testing. Reflective believes that safer software happens when everyone contributes. If you have research, a white paper, or an article of interest that you would like to share with the Reflective community, click here to contribute.


A NASA Software Quality Model
This paper explains a Software Quality Model and then uses it as a basis for discussions of quality attributes and risks. Risks that can be determined by a metrics program are identified and classified. The software product quality attributes are defined and related to the risks. Specific quality attributes are selected based on their importance to the project and their ability to be quantified. The risks and quality attributes are used to derive a core set of metrics relating to the development process and its products, such as requirements and design documents, code, and test plans. Measurements for each metric are defined, and their usability and applicability are discussed.

Applying and Interpreting Object-oriented Metrics
The Software Assurance Technology Center (SATC) at NASA Goddard Space Flight Center discusses its approach to choosing metrics for a project by first identifying the attributes associated with object-oriented development. Within this framework, nine metrics for object-oriented development are selected. These include three traditional metrics adapted for an object-oriented environment and six "new" metrics that evaluate the principal object-oriented structures and concepts. The metrics are first defined and then applied to a deliberately simple object-oriented example. Interpretation guidelines are discussed, and data from NASA projects are used to demonstrate the application of the metrics.
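The SATC paper defines its own metric set, which is not reproduced here. As a purely illustrative sketch of what object-oriented metric collection looks like in practice, the snippet below computes two commonly cited metrics from Python source: a crude method count per class (a proxy for weighted methods per class) and depth of inheritance tree. The class names and source string are hypothetical.

```python
import ast

# Hypothetical example source; the classes and names are made up
# purely to demonstrate the metric computation.
SOURCE = """
class Base:
    def a(self): pass
    def b(self): pass

class Child(Base):
    def c(self): pass
"""

def class_metrics(source):
    """Return per-class method counts and inheritance depth."""
    tree = ast.parse(source)
    bases = {}    # class name -> list of base-class names
    methods = {}  # class name -> number of methods defined in the class body
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            bases[node.name] = [b.id for b in node.bases
                                if isinstance(b, ast.Name)]
            methods[node.name] = sum(isinstance(n, ast.FunctionDef)
                                     for n in node.body)

    def depth(name):
        # Depth of inheritance tree, counted within this module only.
        parents = [p for p in bases.get(name, []) if p in bases]
        return 1 + max((depth(p) for p in parents), default=0)

    return {name: {"methods": methods[name], "dit": depth(name)}
            for name in bases}

print(class_metrics(SOURCE))
# -> {'Base': {'methods': 2, 'dit': 1}, 'Child': {'methods': 1, 'dit': 2}}
```

Real metric tools work the same way at a larger scale: parse the source into a syntax tree, walk it, and count the structures of interest.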

Beyond KLOC: Mining Software Metrics
In an empirical study of the post-release defect history of five Microsoft software systems, the authors found that failure-prone software entities are statistically correlated with code-complexity measures. However, no single set of complexity metrics acts as a universally best defect predictor. Using principal component analysis on the code metrics, they built regression models that accurately predict the likelihood of post-release defects for new entities. The approach can easily be generalized to arbitrary projects; in particular, predictors obtained from one project can also be significant for new, similar projects.

Emergent Chaos
Reflective CTO Adam Shostack's blog on security, privacy, economics, and the occasional pink bunny.

Testing and Java Technology
Explores the basic theories, principles, phases, categories, and tool types currently being used in software testing at Sun, as well as testing issues specific to the world of Java technology.


Finding Source Code Quality and Security Issues Faster
A general technical overview of the multi-algorithm assessment technology used in Reflective's Prism 1000 Analyzer.