Show simple item record

dc.contributor.author: Ackland, Stephen Marc
dc.date.accessioned: 2018-04-09T13:53:37Z
dc.date.available: 2018-04-09T13:53:37Z
dc.date.issued: 2017-04
dc.identifier.uri: http://hdl.handle.net/2086/15859
dc.description.abstract:
The eyes are one of the most expressive non-verbal tools a person has, and they can communicate a great deal to the outside world about that person's intentions. Being able to decipher these communications through robust, non-intrusive gaze-tracking techniques is increasingly important as we look toward improving Human-Computer Interaction (HCI). Traditionally, devices able to determine a user's gaze are large, expensive and often restrictive. This work investigates the prospect of using common mobile devices such as tablets and phones as an alternative means of obtaining a user's gaze. Mobile devices now often contain high-resolution cameras, and their ever-increasing computational power allows increasingly complex algorithms to be performed in real time. A mobile solution turns such a device into a dedicated portable gaze-tracking device for use in a wide variety of situations.

This work looks specifically at where the challenges lie in transitioning current state-of-the-art gaze methodologies to mobile devices, and suggests novel solutions to the specific challenges of the medium. In particular, when the mobile device is held in the hands, fast changes in the position and orientation of the user can occur. In addition, since these devices lack the technologies typically used in gaze estimation, such as infra-red lighting, novel alternatives are required that work under common everyday conditions.

A person's gaze can be determined from both the head pose and the orientation of the eye relative to the head. To meet the challenges outlined, a geometric approach is taken in which a new model for each is introduced; by design, the two are completely synchronised through a common origin. First, a novel 3D head-pose estimation model, the 2.5D Constrained Local Model (2.5D CLM), is introduced that directly and reliably obtains the head pose from a monocular camera. Then, a new model for gaze estimation is introduced: the Constrained Geometric Binocular Model (CGBM), in which the visual ray representing the gaze from each eye is jointly optimised to intersect a known monitor plane in 3D space. The potential of both is that the burden of calibration is placed on the camera and monitor setup, which on mobile devices is fixed and can be determined during factory construction. In turn, the user requires either no calibration or, optionally, a one-time estimation of the visual offset angle. This work details the new models and specifically investigates their applicability and suitability for use on mobile platforms.
dc.language.iso: en
dc.publisher: De Montfort University
dc.subject: Head-pose
dc.subject: binocular eye-gaze
dc.subject: mobile
dc.subject: 2.5D CLM
dc.subject: CGBM
dc.title: Embedded Eye-Gaze Tracking On Mobile Devices
dc.type: Thesis or dissertation
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: PhD
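As an illustrative aside, the abstract describes the CGBM as optimising each eye's visual ray to intersect a known monitor plane in 3D space. The core geometric primitive behind that description is a standard ray–plane intersection; a minimal sketch follows. This is not code from the thesis — all names and numerical values are hypothetical, and the real CGBM jointly optimises both eyes' rays rather than intersecting a single fixed ray.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where a ray meets a plane, or None.

    origin, direction: the ray (direction need not be unit length).
    plane_point, plane_normal: any point on the plane and its normal.
    """
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None  # ray is (nearly) parallel to the plane
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:
        return None  # intersection lies behind the ray origin
    return origin + t * direction

# Hypothetical example: a gaze ray from an eye centre 0.5 m in front
# of a monitor lying in the plane z = 0.
eye = np.array([0.0, 0.0, 0.5])            # eye centre in metres
gaze = np.array([0.1, -0.05, -1.0])        # visual ray direction
screen_point = np.array([0.0, 0.0, 0.0])   # any point on the monitor plane
screen_normal = np.array([0.0, 0.0, 1.0])  # monitor plane normal

hit = ray_plane_intersection(eye, gaze, screen_point, screen_normal)
# hit is the 2D-on-screen gaze point (with z = 0): [0.05, -0.025, 0.0]
```

With a fixed camera-and-monitor geometry, as on a mobile device, the plane parameters are constants of the hardware, which is what lets the calibration burden move from the user to the factory.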

