Real-Time Activity Prediction: A Gaze-Based Approach for Early Recognition of Pen-Based Interaction Tasks

From de_evolutionary_art_org
Version of 2 November 2015, 21:39, by Gubachelier (talk | contribs) (Bibtex)

Reference

Çağla Çığ and Tevfik Metin Sezgin: Real-Time Activity Prediction: A Gaze-Based Approach for Early Recognition of Pen-Based Interaction Tasks. In: Proceedings of the Workshop on Sketch-Based Interfaces and Modeling (SBIM '15), 2015, 59-65.

DOI

http://dx.doi.org/10.2312/exp.20151179

Abstract

Recently there has been a growing interest in sketch recognition technologies for facilitating human-computer interaction. Existing sketch recognition studies mainly focus on recognizing pre-defined symbols and gestures. However, just as there is a need for systems that can automatically recognize symbols and gestures, there is also a pressing need for systems that can automatically recognize pen-based manipulation activities (e.g. dragging, maximizing, minimizing, scrolling). There are two main challenges in classifying manipulation activities. First is the inherent lack of characteristic visual appearances of pen inputs that correspond to manipulation activities. Second is the necessity of real-time classification based upon the principle that users must receive immediate and appropriate visual feedback about the effects of their actions. In this paper (1) an existing activity prediction system for pen-based devices is modified for real-time activity prediction and (2) an alternative time-based activity prediction system is introduced. Both systems use eye gaze movements that naturally accompany pen-based user interaction for activity classification. The results of our comprehensive experiments demonstrate that the newly developed alternative system is a more successful candidate (in terms of prediction accuracy and early prediction speed) than the existing system for real-time activity prediction. More specifically, midway through an activity, the alternative system reaches 66% of its maximum accuracy value (i.e. 66% of 70.34%) whereas the existing system reaches only 36% of its maximum accuracy value (i.e. 36% of 55.69%).
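The relative accuracy figures quoted in the abstract can be unpacked into absolute values; the following quick check uses only the numbers stated above (maximum accuracies of 70.34% and 55.69%, reached to 66% and 36% respectively midway through an activity):

```python
# Maximum prediction accuracies reported in the abstract (percent).
max_acc_alternative = 70.34  # newly developed time-based system
max_acc_existing = 55.69     # modified existing system

# Fraction of the maximum accuracy reached midway through an activity.
midway_fraction_alternative = 0.66
midway_fraction_existing = 0.36

# Absolute accuracy midway through an activity (percent).
midway_acc_alternative = midway_fraction_alternative * max_acc_alternative
midway_acc_existing = midway_fraction_existing * max_acc_existing

print(round(midway_acc_alternative, 2))  # → 46.42
print(round(midway_acc_existing, 2))     # → 20.05
```

In absolute terms, then, the alternative system already classifies roughly 46% of activities correctly at the halfway point, versus roughly 20% for the existing system.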

Bibtex

@inproceedings{Cig:2015:RAP:2810210.2810216,
author = {\c{C}\i\u{g}, \c{C}a\u{g}la and Sezgin, Tevfik Metin},
title = {Real-time Activity Prediction: A Gaze-based Approach for Early Recognition of Pen-based Interaction Tasks},
booktitle = {Proceedings of the Workshop on Sketch-Based Interfaces and Modeling},
series = {SBIM '15},
year = {2015},
location = {Istanbul, Turkey},
pages = {59--65},
numpages = {7},
url = {http://dl.acm.org/citation.cfm?id=2810210.2810216},
acmid = {2810216},
publisher = {Eurographics Association},
address = {Aire-la-Ville, Switzerland},
keywords = {eager activity recognition, feature extraction, gaze-based interaction, multimodal interaction, proactive interfaces, sketch recognition, sketch-based interaction},
}

Used References

1 James Arvo, Kevin Novins, Fluid sketches: continuous recognition and morphing of simple hand-drawn shapes, Proceedings of the 13th annual ACM symposium on User interface software and technology, p.73-80, November 06-08, 2000, San Diego, California, USA http://dl.acm.org/citation.cfm?id=354413 http://dx.doi.org/10.1145/354401.354413

2 {Ble13} Bleicher A.: Eye-tracking software goes mobile. Website, 2013. http://spectrum.ieee.org/computing/software/eyetracking-software-goes-mobile/

3 Andreas Bulling, Daniel Roggen, Gerhard Tröster, What's in the Eyes for Context-Awareness?, IEEE Pervasive Computing, v.10 n.2, p.48-57, April 2011 http://dl.acm.org/citation.cfm?id=1978347 http://dx.doi.org/10.1109/MPRV.2010.49

4 François Courtemanche, Esma Aïmeur, Aude Dufresne, Mehdi Najjar, Franck Mpondo, Activity recognition using eye-gaze movements and traditional interactions, Interacting with Computers, v.23 n.3, p.202-213, May, 2011 http://dl.acm.org/citation.cfm?id=1994156 http://dx.doi.org/10.1016/j.intcom.2011.02.008

5 {ÇS15} Çiğ Ç., Sezgin T. M.: Gaze-based prediction of pen-based virtual interaction tasks. International Journal of Human-Computer Studies 73 (2015), 91--106. http://dx.doi.org/10.1016/j.ijhcs.2014.09.005

6 {DM10} Dunham M., Murphy K.: Probabilistic modeling toolkit for matlab/octave, version 3. Website, 2010. https://github.com/probml/pmtk3/

7 Tracy Hammond, Randall Davis, LADDER, a sketching language for user interface developers, Computers and Graphics, v.29 n.4, p.518-532, August, 2005 http://dl.acm.org/citation.cfm?id=1652714 http://dx.doi.org/10.1016/j.cag.2005.05.005

8 Levent Burak Kara, Thomas F. Stahovich, Hierarchical parsing and recognition of hand-sketched diagrams, Proceedings of the 17th annual ACM symposium on User interface software and technology, October 24-27, 2004, Santa Fe, NM, USA http://dl.acm.org/citation.cfm?id=1029636 http://dx.doi.org/10.1145/1029632.1029636

9 {LMS08} Liu P., Ma L., Soong F. K.: Prefix tree based auto-completion for convenient bi-modal Chinese character input. In IEEE International Conference on Acoustics, Speech and Signal Processing (2008), pp. 4465--4468. http://dx.doi.org/10.1109/ICASSP.2008.4518647

10 Donald A. Norman, The Design of Everyday Things, Basic Books, Inc., New York, NY, 2002 http://dl.acm.org/citation.cfm?id=2187809

11 {NWV08} Niels R. M. J., Willems D. J. M., Vuurpijl L. G.: The NicIcon database of handwritten icons. In Proceedings of the 11th International Conference on the Frontiers of Handwriting Recognition (2008), pp. 296--301.

12 Tom Y. Ouyang, Randall Davis, A visual approach to sketched symbol recognition, Proceedings of the 21st International Joint Conference on Artificial Intelligence, p.1463-1468, July 11-17, 2009, Pasadena, California, USA http://dl.acm.org/citation.cfm?id=1661680

13 Dean Rubine, Specifying gestures by example, ACM SIGGRAPH Computer Graphics, v.25 n.4, p.329-337, July 1991 http://dl.acm.org/citation.cfm?id=122753 http://dx.doi.org/10.1145/127719.122753

14 Ben Steichen, Giuseppe Carenini, Cristina Conati, User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities, Proceedings of the 2013 international conference on Intelligent user interfaces, March 19-22, 2013, Santa Monica, California, USA http://dl.acm.org/citation.cfm?id=2449439 http://dx.doi.org/10.1145/2449396.2449439

15 {SW14} Schmidt M., Weber G.: Prediction of multi-touch gestures during input. In Human-Computer Interaction. Advanced Interaction Modalities and Techniques, vol. 8511 of Lecture Notes in Computer Science. Springer International Publishing, 2014, pp. 158--169. http://dx.doi.org/10.1007/978-3-319-07230-2_16

16 Caglar Tirkaz, Berrin Yanikoglu, T. Metin Sezgin, Sketched symbol recognition with auto-completion, Pattern Recognition, v.45 n.11, p.3926-3937, November, 2012 http://dl.acm.org/citation.cfm?id=2264258 http://dx.doi.org/10.1016/j.patcog.2012.04.026


Links

Full Text

http://www.researchgate.net/profile/Cagla_Cig/publication/276272037_Real-Time_Activity_Prediction_A_Gaze-Based_Approach_for_Early_Recognition_of_Pen-Based_Interaction_Tasks/links/555452eb08ae6943a86f4999.pdf?inViewer=true&pdfJsDownload=true&disableCoverPage=true&origin=publication_detail

Internal file

Other Links

http://dl.acm.org/citation.cfm?id=2810210.2810216