Real-Time Activity Prediction: A Gaze-Based Approach for Early Recognition of Pen-Based Interaction Tasks


Revision as of 30 October 2015, 12:09


Çagla Çıg and Tevfik Metin Sezgin: Real-Time Activity Prediction: A Gaze-Based Approach for Early Recognition of Pen-Based Interaction Tasks. In: Proceedings of the Workshop on Sketch-Based Interfaces and Modeling (SBIM '15), 59-65.



Recently there has been a growing interest in sketch recognition technologies for facilitating human-computer interaction. Existing sketch recognition studies mainly focus on recognizing pre-defined symbols and gestures. However, just as there is a need for systems that can automatically recognize symbols and gestures, there is also a pressing need for systems that can automatically recognize pen-based manipulation activities (e.g. dragging, maximizing, minimizing, scrolling). There are two main challenges in classifying manipulation activities. First is the inherent lack of characteristic visual appearances of pen inputs that correspond to manipulation activities. Second is the necessity of real-time classification based upon the principle that users must receive immediate and appropriate visual feedback about the effects of their actions. In this paper (1) an existing activity prediction system for pen-based devices is modified for real-time activity prediction and (2) an alternative time-based activity prediction system is introduced. Both systems use eye gaze movements that naturally accompany pen-based user interaction for activity classification. The results of our comprehensive experiments demonstrate that the newly developed alternative system is a more successful candidate (in terms of prediction accuracy and early prediction speed) than the existing system for real-time activity prediction. More specifically, midway through an activity, the alternative system reaches 66% of its maximum accuracy value (i.e. 66% of 70.34%) whereas the existing system reaches only 36% of its maximum accuracy value (i.e. 36% of 55.69%).
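The paper's actual features and classifiers are not reproduced here. As a rough illustration of the real-time setting the abstract describes — emitting a prediction after every new gaze sample, using only the data observed so far — the toy sketch below classifies a growing stream of gaze displacements with a nearest-centroid rule. All names, features, window sizes, and centroid values are invented for illustration; the paper uses richer gaze features and different models.

```python
import math

def window_features(samples):
    """Summarize a window of gaze displacements (dx, dy) as a toy
    2-D feature vector: mean displacement magnitude, mean direction."""
    mags = [math.hypot(dx, dy) for dx, dy in samples]
    angs = [math.atan2(dy, dx) for dx, dy in samples]
    return (sum(mags) / len(mags), sum(angs) / len(angs))

def nearest_centroid(feat, centroids):
    """Assign the activity label whose centroid is closest in feature space."""
    return min(centroids, key=lambda label: math.dist(feat, centroids[label]))

def early_predict(samples, centroids, window=5):
    """Real-time use: after each new gaze sample, classify the most
    recent window, so a label is available long before the activity ends."""
    preds = []
    for t in range(1, len(samples) + 1):
        recent = samples[max(0, t - window):t]
        preds.append(nearest_centroid(window_features(recent), centroids))
    return preds
```

With hypothetical centroids for, say, "scrolling" (large vertical motion) and "dragging" (shorter diagonal motion), feeding samples one at a time yields a label stream whose accuracy at the activity's midpoint can be compared to its final accuracy — the early-prediction measure the abstract reports.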

Extended Abstract


@inproceedings{cig2015realtime,
  author = {\c{C}\i\u{g}, \c{C}a\u{g}la and Sezgin, Tevfik Metin},
  title = {Real-time Activity Prediction: A Gaze-based Approach for Early Recognition of Pen-based Interaction Tasks},
  booktitle = {Proceedings of the Workshop on Sketch-Based Interfaces and Modeling},
  series = {SBIM '15},
  year = {2015},
  location = {Istanbul, Turkey},
  pages = {59--65},
  numpages = {7},
  url = {},
  acmid = {2810216},
  publisher = {Eurographics Association},
  address = {Aire-la-Ville, Switzerland},
  keywords = {eager activity recognition, feature extraction, gaze-based interaction, multimodal interaction, proactive interfaces, sketch recognition, sketch-based interaction},
}

Used References

1 James Arvo, Kevin Novins, Fluid sketches: continuous recognition and morphing of simple hand-drawn shapes, Proceedings of the 13th annual ACM symposium on User interface software and technology, p.73-80, November 06-08, 2000, San Diego, California, USA

2 Bleicher A.: Eye-tracking software goes mobile. Website, 2013.

3 Andreas Bulling, Daniel Roggen, Gerhard Troester, What's in the Eyes for Context-Awareness?, IEEE Pervasive Computing, v.10 n.2, p.48-57, April 2011

4 François Courtemanche, Esma Aïmeur, Aude Dufresne, Mehdi Najjar, Franck Mpondo, Activity recognition using eye-gaze movements and traditional interactions, Interacting with Computers, v.23 n.3, p.202-213, May 2011

5 Çiğ Ç., Sezgin T. M.: Gaze-based prediction of pen-based virtual interaction tasks. International Journal of Human-Computer Studies 73 (2015), 91--106.

6 Dunham M., Murphy K.: Probabilistic modeling toolkit for Matlab/Octave, version 3. Website, 2010.

7 Tracy Hammond, Randall Davis, LADDER, a sketching language for user interface developers, Computers and Graphics, v.29 n.4, p.518-532, August 2005

8 Levent Burak Kara, Thomas F. Stahovich, Hierarchical parsing and recognition of hand-sketched diagrams, Proceedings of the 17th annual ACM symposium on User interface software and technology, October 24-27, 2004, Santa Fe, NM, USA

9 Liu P., Ma L., Soong F. K.: Prefix tree based auto-completion for convenient bi-modal Chinese character input. In IEEE International Conference on Acoustics, Speech and Signal Processing (2008), pp. 4465--4468.

10 Donald A. Norman, The Design of Everyday Things, Basic Books, Inc., New York, NY, 2002

11 Niels R. M. J., Willems D. J. M., Vuurpijl L. G.: The NicIcon database of handwritten icons. In Proceedings of the 11th International Conference on the Frontiers of Handwriting Recognition (2008), pp. 296--301.

12 Tom Y. Ouyang, Randall Davis, A visual approach to sketched symbol recognition, Proceedings of the 21st International Joint Conference on Artificial Intelligence, p.1463-1468, July 11-17, 2009, Pasadena, California, USA

13 Dean Rubine, Specifying gestures by example, ACM SIGGRAPH Computer Graphics, v.25 n.4, p.329-337, July 1991

14 Ben Steichen, Giuseppe Carenini, Cristina Conati, User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities, Proceedings of the 2013 international conference on Intelligent user interfaces, March 19-22, 2013, Santa Monica, California, USA

15 Schmidt M., Weber G.: Prediction of multi-touch gestures during input. In Human-Computer Interaction. Advanced Interaction Modalities and Techniques, vol. 8511 of Lecture Notes in Computer Science. Springer International Publishing, 2014, pp. 158--169.

16 Caglar Tirkaz, Berrin Yanikoglu, T. Metin Sezgin, Sketched symbol recognition with auto-completion, Pattern Recognition, v.45 n.11, p.3926-3937, November 2012


Full Text

Internal file

Other links