Prism is designed around four basic functions. The user accesses Prism through a browser-based Administration Console (1), which interacts with the Analyzer Manager (2) that governs the testing process and procedures. Based on the type of code and the desired tests, the Analyzer Manager invokes Prism’s suite of testing applications. As the testing applications begin their analyses, they draw information from Prism’s issue database (3). When the tests are complete, Prism updates and stores the results of each run in the Prism Knowledgebase. Prism then generates a set of reports (4) for the user to review. From here, the user can store, review, and manage the Prism output as necessary.
Figure 4 - Prism Architecture
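The four-step flow above can be sketched in code. This is a minimal illustration only: every class and method name here (AdminConsole, AnalyzerManager, and so on) is an assumption chosen to mirror the architecture description, not Prism's actual API.

```python
# Hypothetical sketch of Prism's four-step flow; all names are
# illustrative assumptions, not Prism's real interfaces.

class IssueDatabase:
    """Step 3: reference data the analyzers draw on during a run."""
    def rules_for(self, language):
        # Assumed rule sets keyed by source language.
        return {"java": ["null-deref", "resource-leak"]}.get(language, [])

class Knowledgebase:
    """Stores and updates the results from each completed run."""
    def __init__(self):
        self.runs = []
    def store(self, results):
        self.runs.append(results)

class AnalyzerManager:
    """Step 2: governs the testing process and procedures."""
    def __init__(self, issue_db, kb):
        self.issue_db = issue_db
        self.kb = kb
    def run(self, project, language):
        rules = self.issue_db.rules_for(language)   # draw on issue database
        results = {rule: f"checked {project}" for rule in rules}
        self.kb.store(results)                      # persist the run
        return results

class AdminConsole:
    """Step 1: the user's browser-based entry point."""
    def __init__(self, manager):
        self.manager = manager
    def submit(self, project, language):
        results = self.manager.run(project, language)
        # Step 4: generate a report set for the user to review.
        return [f"{rule}: {status}" for rule, status in results.items()]

db, kb = IssueDatabase(), Knowledgebase()
console = AdminConsole(AnalyzerManager(db, kb))
report = console.submit("billing-app", "java")
```

The point of the sketch is the separation of concerns: the console never touches the issue database or the Knowledgebase directly; all testing activity is mediated by the Analyzer Manager.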
Projects in process are tracked and remain available until the testing engineer closes the job. The engineer can choose which tests to run, which rules to incorporate, which issues to suppress or highlight, and what depth of testing intensity to use. The quality assurance or application development group can define standard testing procedures in Prism that mandate a certain set of rules and metrics for each application in development. Metrics, summary reports, and detailed reports, including graphs of all relevant testing data, are stored with the project and are always accessible. Reports can be easily configured to the needs of the testing engineer, development group, or management team.
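A standard testing procedure of the kind described above can be modeled as a small configuration object. The field names (rules, suppressed, highlighted, intensity) are assumptions drawn from the options the text lists, not Prism's actual schema.

```python
# Hypothetical model of a standard testing procedure; field names are
# illustrative assumptions based on the options described in the text.
from dataclasses import dataclass, field

@dataclass
class TestProcedure:
    rules: list                                    # rules mandated for every application
    suppressed: set = field(default_factory=set)   # issues to hide from reports
    highlighted: set = field(default_factory=set)  # issues to surface first
    intensity: str = "standard"                    # depth of testing intensity

    def filter_issues(self, issues):
        # Drop suppressed findings; list highlighted ones first.
        kept = [i for i in issues if i not in self.suppressed]
        return sorted(kept, key=lambda i: i not in self.highlighted)

# A QA group's mandated procedure, applied to every project.
qa_standard = TestProcedure(
    rules=["complexity", "security"],
    suppressed={"style-nit"},
    highlighted={"sql-injection"},
    intensity="deep",
)
found = ["style-nit", "dead-code", "sql-injection"]
triaged = qa_standard.filter_issues(found)
```

Defining the procedure once and applying it per project is what lets a QA group mandate a consistent rule set across every application in development.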