Assessment of the assessment: Evaluation of the model quality estimates in CASP10
KAUST Grant Number: KUKI1-012-43
Abstract: The article presents an assessment of the ability of the thirty-seven model quality assessment (MQA) methods participating in CASP10 to provide an a priori estimation of the quality of structural models, and of the 67 tertiary structure prediction groups to provide confidence estimates for their predicted coordinates. The assessment of MQA predictors is based on the methods used in previous CASPs, such as the correlation between the predicted and observed quality of the models (at both the global and local levels), the accuracy of methods in distinguishing between good and bad models, as well as between good and bad regions within them, and the ability to identify the best models in the decoy sets. Several numerical evaluations were used in our analysis for the first time, such as the comparison of global and local quality predictors with reference (baseline) predictors and a ROC analysis of the predictors' ability to differentiate between well and poorly modeled regions. For the evaluation of the reliability of self-assessment of the coordinate errors, we used the correlation between the predicted and observed deviations of the coordinates and a ROC analysis of correctly identified errors in the models. A modified two-stage procedure for testing MQA methods in CASP10, whereby a small number of models spanning the whole range of model accuracy was released first, followed by the release of a larger number of models of more uniform quality, allowed a more thorough analysis of the abilities and limitations of different types of methods. Clustering methods were shown to have an advantage over the single- and quasi-single-model methods on the larger datasets. At the same time, the evaluation revealed that the size of the dataset has a smaller influence on the global quality assessment scores (for both clustering and nonclustering methods) than its diversity.
Narrowing the quality range of the assessed models caused a significant decrease in ranking accuracy for global quality predictors but left the results for local predictors essentially unchanged. Self-assessed error estimates submitted by the majority of groups were poor overall, with two research groups showing significantly better results than the rest.
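The two core evaluation measures named in the abstract, correlation between predicted and observed model quality and ROC analysis of good/bad discrimination, can be sketched in pure Python. The toy data and the 0.5 "good model" cutoff below are illustrative assumptions, not CASP's actual scores or thresholds.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between predicted and observed quality scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def roc_auc(scores, labels):
    """ROC AUC via the rank interpretation: the probability that a randomly
    chosen positive (well-modeled) case outscores a negative one."""
    pos = [s for s, lab in zip(scores, labels) if lab == 1]
    neg = [s for s, lab in zip(scores, labels) if lab == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy example: predicted quality vs. observed (GDT_TS-like) quality
predicted = [0.9, 0.7, 0.6, 0.4, 0.2]
observed = [0.85, 0.75, 0.5, 0.45, 0.1]
labels = [1 if o > 0.5 else 0 for o in observed]  # illustrative cutoff

print(round(pearson(predicted, observed), 3))  # → 0.967
print(roc_auc(predicted, labels))              # → 1.0
```

A perfect AUC of 1.0 here simply means every "good" model received a higher predicted score than every "bad" one; the paper's per-residue ROC analysis applies the same idea at the level of well- vs. poorly-modeled regions.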
Citation: Kryshtafovych A, Barbato A, Fidelis K, Monastyrskyy B, Schwede T, et al. (2013) Assessment of the assessment: Evaluation of the model quality estimates in CASP10. Proteins: Structure, Function, and Bioinformatics 82: 112–126. Available: http://dx.doi.org/10.1002/prot.24347.
Sponsors: Grant sponsor: US National Institute of General Medical Sciences (NIGMS/NIH); Grant number: R01GM100482; Grant sponsor: KAUST Award; Grant number: KUKI1-012-43; EMBO.
PubMed Central ID: PMC4406045
Collections: Publications Acknowledging KAUST Support
- Methods of model accuracy estimation can help selecting the best models from decoy sets: Assessment of model accuracy estimations in CASP11.
- Authors: Kryshtafovych A, Barbato A, Monastyrskyy B, Fidelis K, Schwede T, Tramontano A
- Issue date: 2016 Sep
- Evaluation of model quality predictions in CASP9.
- Authors: Kryshtafovych A, Fidelis K, Tramontano A
- Issue date: 2011
- Evaluation of CASP8 model quality predictions.
- Authors: Cozzetto D, Kryshtafovych A, Tramontano A
- Issue date: 2009
- Evaluation of residue-residue contact prediction in CASP10.
- Authors: Monastyrskyy B, D'Andrea D, Fidelis K, Tramontano A, Kryshtafovych A
- Issue date: 2014 Feb
- Assessment of protein disorder region predictions in CASP10.
- Authors: Monastyrskyy B, Kryshtafovych A, Moult J, Tramontano A, Fidelis K
- Issue date: 2014 Feb