Show simple item record

dc.contributor.author: Hopgood, Adrian A.
dc.contributor.author: McQueen, T. A.
dc.contributor.author: Allen, T. J.
dc.contributor.author: Tepper, J. A.
dc.date.accessioned: 2008-11-24T13:24:17Z
dc.date.available: 2008-11-24T13:24:17Z
dc.date.issued: 2005-08-01
dc.identifier.citation: McQueen, T. et al. (2005) Extracting finite structure from infinite language. Knowledge-Based Systems, 18(4-5), pp. 135-141.
dc.identifier.issn: 0950-7051
dc.identifier.uri: http://hdl.handle.net/2086/196
dc.description: This paper presents a novel unsupervised neural network model for learning the finite-state properties of an input language from a set of positive examples. The model is demonstrated to learn the Reber grammar perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set. Crucially, it does not require negative examples. 30% of the tests yielded perfect grammar recognizers, compared with only 2% reported by other authors for simple recurrent networks. The paper was initially presented at the AI-2004 conference, where it won the Best Technical Paper award.
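The abstract refers to learning the Reber grammar from positive examples only. As an illustrative aside (this is not the authors' model or code), a minimal generator of positive Reber-grammar strings can be sketched in Python; the transition table encodes the standard Reber finite-state machine, and the function name is an assumption for illustration:

```python
import random

# Standard Reber grammar as a finite-state machine: each state maps to
# the (symbol, next_state) choices available from it. Illustrative only;
# not taken from the paper's implementation.
REBER = {
    0: [('T', 1), ('P', 2)],
    1: [('S', 1), ('X', 3)],
    2: [('T', 2), ('V', 4)],
    3: [('X', 2), ('S', 5)],
    4: [('P', 3), ('V', 5)],
}


def generate_reber(rng=random):
    """Generate one positive example: walk the FSM from start to the
    accepting state (5), choosing transitions uniformly at random."""
    s, state = 'B', 0          # every string begins with the marker B
    while state != 5:
        sym, state = rng.choice(REBER[state])
        s += sym
    return s + 'E'             # and ends with the marker E
```

A training set of positive examples, as described in the abstract, would then be a list such as `[generate_reber() for _ in range(1000)]`.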
dc.language.iso: en
dc.publisher: Elsevier
dc.subject: RAE 2008
dc.subject: UoA 23 Computer Science and Informatics
dc.subject: artificial neural networks
dc.subject: grammar induction
dc.subject: natural language processing
dc.subject: self-organizing map
dc.subject: STORM (Spatio Temporal Self-Organizing Recurrent Map)
dc.title: Extracting finite structure from infinite language
dc.type: Article
dc.identifier.doi: http://dx.doi.org/10.1016/j.knosys.2004.10.010
dc.researchgroup: Centre for Computational Intelligence


Files in this item


There are no files associated with this item.
