Are grammatical representations useful for learning from biological sequence data?— a case study

Muggleton, SH; Bryant, CH; Srinivasan, A; Whittaker, A; Topp, S; Rawlings, C

Authors

SH Muggleton

CH Bryant

A Srinivasan

A Whittaker

S Topp

C Rawlings



Abstract

This paper investigates whether Chomsky-like grammar representations are useful for learning cost-effective, comprehensible predictors of members of biological sequence families. The Inductive Logic Programming (ILP) Bayesian approach to learning from positive examples is used to generate a grammar for recognising a class of proteins known as human neuropeptide precursors (NPPs). Collectively, five of the co-authors of this paper have extensive expertise in NPPs and general bioinformatics methods. Their motivation for generating an NPP grammar was that none of the existing bioinformatics methods could provide sufficient cost-savings during the search for new NPPs. Prior to this project, experienced specialists at SmithKline Beecham had tried for many months to hand-code such a grammar, but without success. Our best predictor makes the search for novel NPPs more than 100 times more efficient than randomly selecting proteins for synthesis and testing them for biological activity. As far as these authors are aware, this is both the first biological grammar learnt using ILP and the first real-world scientific application of the ILP Bayesian approach to learning from positive examples. A group of features is derived from this grammar. Other groups of features of NPPs are derived using other learning strategies. Amalgams of these groups are formed. A recognition model is generated for each amalgam using C4.5 and C4.5rules, and its performance is measured using both predictive accuracy and a new cost function, Relative Advantage (RA). The highest RA was achieved by a model which includes grammar-derived features. This RA is significantly higher than the best RA achieved without the use of the grammar-derived features. Predictive accuracy is not a good measure of performance for this domain because it does not discriminate well between NPP recognition models: despite covering varying numbers of (the rare) positives, all the models are awarded a similar (high) score by predictive accuracy because they all exclude most of the abundant negatives.
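The abstract's point about predictive accuracy can be made concrete with a small illustrative sketch. The class sizes and confusion-matrix counts below are hypothetical, not figures from the paper, and the "enrichment over random" ratio is an assumed stand-in for "efficiency relative to randomly selecting proteins"; it is not the paper's Relative Advantage (RA) cost function, whose definition is not given in this record.

```python
# Illustrative only: hypothetical counts, not results from the paper.
# "Enrichment" here is a simple hit-rate ratio, not the paper's RA function.

def accuracy(tp, fn, tn, fp):
    """Fraction of all sequences classified correctly."""
    return (tp + tn) / (tp + fn + tn + fp)

def enrichment_over_random(tp, fp, positives, total):
    """Hit rate among predicted positives divided by the base rate,
    i.e. how much better the model is than picking sequences at random."""
    base_rate = positives / total
    hit_rate = tp / (tp + fp)
    return hit_rate / base_rate

positives, negatives = 50, 10_000          # rare NPPs vs. abundant non-NPPs
total = positives + negatives

# Two hypothetical recognition models: both exclude almost all negatives,
# but they recover very different numbers of the rare positives.
models = {
    "model_A": dict(tp=40, fn=10, tn=9_960, fp=40),   # covers 80% of positives
    "model_B": dict(tp=10, fn=40, tn=9_980, fp=20),   # covers 20% of positives
}

for name, m in models.items():
    acc = accuracy(**m)
    enr = enrichment_over_random(m["tp"], m["fp"], positives, total)
    print(f"{name}: accuracy = {acc:.3f}, enrichment over random = {enr:.0f}x")

# Both models score ~0.99 accuracy because the abundant negatives dominate,
# yet their value for prioritising sequences for synthesis differs sharply.
```

Running the sketch, both hypothetical models reach roughly 0.99 accuracy, while one offers about 100-fold and the other only about 67-fold improvement over random selection, which is why a cost-sensitive measure such as RA discriminates between models that predictive accuracy cannot.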

Citation

Muggleton, S., Bryant, C., Srinivasan, A., Whittaker, A., Topp, S., & Rawlings, C. (2001). Are grammatical representations useful for learning from biological sequence data?— a case study. Journal of Computational Biology, 8(5), 493-521. https://doi.org/10.1089/106652701753216512

Journal Article Type: Article
Publication Date: Oct 1, 2001
Deposit Date: Feb 10, 2015
Publicly Available Date: Apr 5, 2016
Journal: Journal of Computational Biology
Print ISSN: 1066-5277
Publisher: Mary Ann Liebert
Peer Reviewed: Peer Reviewed
Volume: 8
Issue: 5
Pages: 493-521
DOI: https://doi.org/10.1089/106652701753216512
Keywords: Bioinformatics, Machine Learning, Inductive Logic Programming, Cost Function, Grammatical Inference
Publisher URL: http://dx.doi.org/10.1089/106652701753216512
Related Public URLs: http://www.liebertpub.com/
http://www.salford.ac.uk/computing-science-engineering/cse-academics/chris-bryant
Additional Information:
Funders: SmithKline Beecham
Projects: Using Machine Learning to Discover Diagnostic Sequence Motifs
