UNIVERSITY OF HERTFORDSHIRE COMPUTER SCIENCE RESEARCH COLLOQUIUM

presents

"When is a Software Defect not a Defect? Different Classifiers Find Different Defects although with Different Level of Consistency"

Dr. David Bowes (School of Computer Science, University of Hertfordshire)

14 October 2015 (Wednesday), 10 am - 11 am
Hatfield, College Lane Campus, Seminar Room C408

Everyone is welcome to attend. Refreshments will be available.

Abstract:

BACKGROUND: During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall.

OBJECTIVE: We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers.

METHOD: We perform a sensitivity analysis to compare the performance of Random Forest, Naive Bayes, RPart and SVM classifiers when predicting defects in 12 NASA data sets. The defect predictions that each classifier makes are captured in a confusion matrix, and the prediction uncertainty is compared across classifiers.

RESULTS: Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others.

CONCLUSIONS: Our results confirm that a unique sub-set of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Classifier ensembles with decision-making strategies not based on majority voting are likely to perform best.

---------------------------------------------------
Hertfordshire Computer Science Research Colloquium
http://cs-colloq.stca.herts.ac.uk
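
As a rough illustration of the METHOD step in the abstract, the sketch below trains the four named classifier types on a generic defect data set and compares which individual defective modules each one finds. It is a minimal sketch, not the authors' actual experimental pipeline: it assumes scikit-learn, a hypothetical CSV of module metrics with a binary "defective" column, and substitutes a decision tree for R's rpart.

```python
# Minimal illustrative sketch (not the authors' pipeline): train four classifiers
# on a defect data set and compare WHICH defective modules each one finds.
# Assumes a CSV of static-code metrics with a binary "defective" column;
# the file name and column name are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier  # stands in for R's rpart
from sklearn.model_selection import train_test_split

data = pd.read_csv("nasa_module_metrics.csv")  # hypothetical file name
X, y = data.drop(columns=["defective"]), data["defective"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

classifiers = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Decision tree (rpart stand-in)": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
}

# For each classifier, record the set of truly defective test modules it flags
# (its true positives), then compare those sets pairwise.
found = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    found[name] = {i for i, (p, t) in enumerate(zip(pred, y_test))
                   if p == 1 and t == 1}

for a in found:
    for b in found:
        if a < b:
            only_a = found[a] - found[b]
            print(f"{a} finds {len(only_a)} defects that {b} misses")
```

Under these assumptions, two classifiers can report near-identical recall while flagging noticeably different subsets of defective modules, which is the effect the talk examines.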