Show simple item record

dc.contributor.author	Hopgood, Adrian A.	en
dc.contributor.author	McQueen, T. A.	en
dc.contributor.author	Allen, T. J.	en
dc.contributor.author	Tepper, J. A.	en
dc.identifier.citation	McQueen, T. et al. (2005) Extracting finite structure from infinite language. Knowledge-Based Systems, 18(4-5), pp. 135-141.
dc.description	This paper presents a novel unsupervised neural network model for learning the finite-state properties of an input language from a set of positive examples. The model is demonstrated to learn the Reber grammar perfectly from a randomly generated training set and to generalize to sequences longer than those found in the training set. Crucially, it does not require negative examples. In 30% of the test runs, the model yielded a perfect grammar recognizer, compared with only 2% reported by other authors for simple recurrent networks. The paper was initially presented at the AI-2004 conference, where it won the Best Technical Paper award.	en
dc.subject	RAE 2008
dc.subject	UoA 23 Computer Science and Informatics
dc.subject	artificial neural networks
dc.subject	grammar induction
dc.subject	natural language processing
dc.subject	self-organizing map
dc.subject	STORM (Spatio Temporal Self-Organizing Recurrent Map)
dc.title	Extracting finite structure from infinite language	en
dc.researchgroup	Centre for Computational Intelligence
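The Reber grammar referenced in the abstract is a standard benchmark finite-state grammar for sequence-learning experiments. As an illustration only (not code from the paper), the following is a minimal Python sketch of the grammar's transition table, a generator for positive training examples, and an acceptor for checking candidate strings; the state numbering is an assumption of this sketch.

```python
import random

# Classic Reber grammar as a finite-state machine.
# Each state maps to the (symbol, next_state) choices available from it.
# Every string begins with 'B'; reaching state 5 emits the final 'E'.
REBER = {
    0: [('T', 1), ('P', 2)],
    1: [('S', 1), ('X', 3)],
    2: [('T', 2), ('V', 4)],
    3: [('X', 2), ('S', 5)],
    4: [('P', 3), ('V', 5)],
}

def generate(rng=random):
    """Generate one positive example by a random walk through the grammar."""
    out, state = ['B'], 0
    while state != 5:
        sym, state = rng.choice(REBER[state])
        out.append(sym)
    out.append('E')
    return ''.join(out)

def accepts(s):
    """Return True if s is a string of the Reber grammar."""
    if len(s) < 2 or s[0] != 'B' or s[-1] != 'E':
        return False
    state = 0
    for sym in s[1:-1]:
        # Transitions are deterministic per symbol, so a dict lookup suffices.
        nxt = dict(REBER[state]).get(sym)
        if nxt is None:
            return False
        state = nxt
    return state == 5
```

For example, `accepts('BTXSE')` and `accepts('BPVVE')` hold, while `accepts('BTXE')` does not; a training set of positive examples, as used in the paper's experiments, can be built by repeated calls to `generate()`.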

Files in this item


There are no files associated with this item.
