
dc.contributor.author: Rafferty, Joseph
dc.contributor.author: Nugent, Chris
dc.contributor.author: Liu, Jun
dc.contributor.author: Chen, Liming
dc.date.accessioned: 2017-03-21T10:51:16Z
dc.date.available: 2017-03-21T10:51:16Z
dc.date.issued: 2015-08-08
dc.identifier.citation: Rafferty, J. et al. (2015) Automatic Metadata Generation Through Analysis of Narration Within Instructional Videos. Journal of Medical Systems, 39 (9): 94:1-94:7
dc.identifier.issn: 0148-5598
dc.identifier.uri: http://hdl.handle.net/2086/13778
dc.description: The file attached to this record is the author's final peer-reviewed version. The publisher's final version can be found by following the DOI link.
dc.description.abstract: Current activity recognition based assistive living solutions have adopted relatively rigid models of inhabitant activities. These solutions have some deficiencies associated with the use of these models. To address this, a goal-oriented solution has been proposed. In a goal-oriented solution, goal models offer a method of flexibly modelling inhabitant activity. The flexibility of these goal models can dynamically produce a large number of varying action plans that may be used to guide inhabitants. In order to provide illustrative, video-based instruction for these numerous action plans, a number of video clips would need to be associated with each variation. To address this, rich metadata may be used to automatically match appropriate video clips from a video repository to each specific, dynamically generated activity plan. This study introduces a mechanism for automatically generating suitable rich metadata representing actions depicted within video clips to facilitate such video matching. The performance of this mechanism was evaluated using eighteen video files; during this evaluation, metadata was automatically generated with a high level of accuracy.
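
The abstract describes matching automatically generated, action-level metadata against dynamically generated activity plans. The Python sketch below illustrates that matching idea only; the data structures, field names, and keyword-overlap scoring are illustrative assumptions, not the mechanism evaluated in the paper.

# Hypothetical sketch (Python 3.10+): pick the video clip whose action
# metadata best overlaps a step of a generated activity plan. The
# VideoClip fields and the overlap score are illustrative assumptions,
# not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class VideoClip:
    path: str
    actions: set[str] = field(default_factory=set)  # metadata terms, e.g. derived from narration

def best_clip_for_step(step_terms: set[str], clips: list[VideoClip]) -> VideoClip | None:
    """Return the clip whose action metadata overlaps most with the plan step."""
    scored = [(len(step_terms & c.actions), c) for c in clips]
    score, clip = max(scored, key=lambda sc: sc[0], default=(0, None))
    return clip if score > 0 else None

if __name__ == "__main__":
    clips = [
        VideoClip("fill_kettle.mp4", {"fill", "kettle", "water"}),
        VideoClip("boil_water.mp4", {"boil", "kettle", "switch"}),
    ]
    step = {"fill", "kettle"}  # one action from a dynamically generated plan
    print(best_clip_for_step(step, clips))
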
dc.publisher: Springer US
dc.subject: Assistive living
dc.subject: Automated speech recognition
dc.subject: Metadata
dc.subject: Ontology
dc.subject: Parsing
dc.subject: Smart environments
dc.subject: Video
dc.title: Automatic Metadata Generation Through Analysis of Narration Within Instructional Videos
dc.type: Article
dc.identifier.doi: http://dx.doi.org/10.1007/s10916-015-0295-2
dc.peerreviewed: Yes
dc.funder: N/A
dc.projectid: N/A
dc.cclicence: CC-BY-NC-ND
dc.date.acceptance: 2015-05-08
dc.researchinstitute: Cyber Technology Institute (CTI)

