5.00 score from hupso.pl for:
yatani.jp



HTML Content


Title koji yatani, ph.d. - the university of tokyo

Length: 44, Words: 9
Description empty

Length: 0, Words: 0
Keywords koji, yatani, [garbled japanese keywords], ubicomp, mobile, hci, user interface, university of tokyo
Robots
Charset UTF-8
Og Meta - Title empty
Og Meta - Description empty
Og Meta - Site name empty
The title should be between 10 and 70 characters long (including spaces) and fewer than 12 words.
The meta description should be between 50 and 160 characters long (including spaces) and fewer than 24 words.
The character encoding should be specified; UTF-8 is probably the best character set to use, as it is the most international encoding.
Open Graph meta tags should be present on the web page (more information on the Open Graph protocol: http://ogp.me/).
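The length rules above can be expressed as a small validator. This is an illustrative sketch, not hupso.pl's actual tooling; the helper names and the use of `str.split()` for word counting are assumptions, while the thresholds mirror the report's stated rules.

```python
# Minimal sketch of the title / meta-description checks stated above.
# Helper names are hypothetical; thresholds follow the report's rules.

def check_title(title: str) -> bool:
    """Title: 10-70 characters (including spaces), fewer than 12 words."""
    return 10 <= len(title) <= 70 and len(title.split()) < 12

def check_description(desc: str) -> bool:
    """Meta description: 50-160 characters, fewer than 24 words."""
    return 50 <= len(desc) <= 160 and len(desc.split()) < 24

# The audited page: its 44-character title passes, its empty description fails.
print(check_title("koji yatani, ph.d. - the university of tokyo"))  # True
print(check_description(""))                                        # False
```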

SEO Content

Words/Characters 6968
Text/HTML 55.61 %
Headings H1 0
H2 0
H3 0
H4 0
H5 0
H6 0
Bolds strong 0
b 0
i 0
em 0
The page content should contain more than 250 words, with a text/HTML ratio higher than 20%.
Headings: use heading tags (h1, h2, h3, ...) to indicate the topic of sections or paragraphs on the page, but generally use fewer than 6 of each heading tag to keep your page concise.
Style: use strong and italic tags to emphasize your page's keywords, but do not overuse them (fewer than 16 strong tags and 16 italic tags).
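Counting heading and emphasis tags, as the report does, can be sketched with the standard-library HTML parser. The `TagCounter` class below is a hypothetical illustration, not the analyzer's implementation.

```python
# Sketch of counting heading and emphasis tags with the standard library.
from collections import Counter
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    TRACKED = {"h1", "h2", "h3", "h4", "h5", "h6", "strong", "b", "i", "em"}

    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag names, so membership tests are simple.
        if tag in self.TRACKED:
            self.counts[tag] += 1

counter = TagCounter()
counter.feed("<h1>bio</h1><p><strong>hci</strong> and <em>ubicomp</em></p>")
print(dict(counter.counts))  # {'h1': 1, 'strong': 1, 'em': 1}
```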

Page statistics

twitter:title empty
twitter:description empty
google+ itemprop=name empty
External files 2
CSS files 1
JavaScript files 1
Files: reduce the total number of referenced files (CSS + JavaScript) to at most 7-8.

Internal and external links

Links 174
Internal links 141
External links 33
Links without a title attribute 174
Links with a NOFOLLOW attribute 0
Links: use the title attribute for every link. A nofollow link tells search engine bots not to follow it; pay attention to how nofollow is used.
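A link audit like the one above (anchors missing a title attribute, anchors marked nofollow) can be sketched as follows. `LinkAudit` is an illustrative name; a real crawler would also resolve and fetch the URLs.

```python
# Sketch of the link audit described above, standard library only.
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_title = 0
        self.nofollow = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attr = dict(attrs)
        if "title" not in attr:
            self.missing_title += 1
        if "nofollow" in (attr.get("rel") or ""):
            self.nofollow += 1

audit = LinkAudit()
audit.feed('<a href="/hcistats">my wiki</a>'
           '<a href="publication.php" title="publications" rel="nofollow">list</a>')
print(audit.missing_title, audit.nofollow)  # 1 1
```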

Internal links

- images/me_large.jpg
日本語 (japanese) /j/
research #research
publication publication.php
cv koji_yatani_cv.pdf
hci stats wiki /hcistats
professional activities #professional
biography #biography
my wiki /hcistats
[pdf] paper/sa2015.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/chi2015.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/cscw2015.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/mobilehci2014_reviewcollage.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/mobilehci2014_talkzones.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/chi2014_mobileoveruse.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/chi2014_turningpoint.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/chi2014_pitchperfect.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/ijmhci2013.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/chi2013_sidepoint.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/chi2013_hyperslides.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/iconference2013.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/ubicomp2012.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/chi2012.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/cscw2012.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/its2011.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/uist2011.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/ijcai2011.pdf
[pdf] paper/chi2011.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/uist2010_footgesture.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/uist2010_penplustouch.pdf
[pdf] paper/chi_alt2010.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/uist2009_ds.pdf
[pdf] paper/uist2009.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/vlhcc2010.pdf
[pdf] paper/chi2009.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/ieeepervasive2009.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/chi2008.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/pmc2009.pdf
[pdf] paper/mobilehci2007.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/icec2005.pdf
[pdf] paper/ubicomp2005_w5.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/tencon2005.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/ieice_trans2006.pdf
[pdf] paper/chi2005.pdf
[pdf] paper/wmcsa2004.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/ace2005.pdf
see the publication list publication.php
see the publication list #
hide the publication list #
[pdf] paper/scj2004.pdf
[pdf] paper/wmte2004.pdf
see the publication list publication.php
see the publication list #
hide the publication list #

External links

interactive intelligent systems laboratory http://iis-lab.org/
this page http://iis-lab.org/prospective/
teaching http://yatani.jp/teaching/doku.php
photos http://www.flickr.com/photos/yatani/collections/
[video] https://www.youtube.com/watch?v=w0ymwiy6sa4
[video] https://www.youtube.com/watch?v=qdpg2e0rybg
[video] https://www.youtube.com/watch?v=90mplqvhjjw
[video] http://youtu.be/pjtdotap6jq
[video] http://www.youtube.com/watch?v=ns-bih8p8iu
[publisher] http://dx.doi.org/10.1145/2076354.2076378
[publisher] http://dx.doi.org/10.1145/2047196.2047257
[publisher] http://dx.doi.org/10.1145/1978942.1979167
[video] http://www.youtube.com/watch?v=onskqg4akow
[video] http://www.youtube.com/watch?v=9stglyh8qws
[publisher] http://doi.acm.org/10.1145/1753846.1753865
[video] http://www.youtube.com/watch?v=yj70xyppzva
[publisher] http://doi.acm.org/10.1145/1622176.1622198
[publisher] http://doi.acm.org/10.1145/1518701.1518853
[publisher] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4736478
[video] http://www.youtube.com/watch?v=x3nezswkkkw
[publisher] http://doi.acm.org/10.1145/1357054.1357104
[publisher] http://dx.doi.org/10.1016/j.pmcj.2009.04.002
[publisher] http://doi.acm.org/10.1145/1377999.1378059
[video] http://www.youtube.com/watch?v=fnwokx6syrk
[publisher] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4085235
[publisher] http://ietisy.oxfordjournals.org/cgi/content/abstract/e89-d/1/150
[video] http://www.youtube.com/watch?v=74wlt4wnyq4
[publisher] http://doi.acm.org/10.1145/1056808.1057046
[publisher] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1377323
[publisher] http://doi.acm.org/10.1145/1178477.1178478
[publisher] http://doi.wiley.com/10.1002/scj.10696
[video] http://www.youtube.com/watch?v=kumzkjqy_tc
[publisher] http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1281344

Images

Images 33
Images without an ALT attribute 33
Images without a TITLE attribute 33
Use the ALT and TITLE attributes for every image.
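The image audit above (img tags whose alt or title attribute is missing) can be sketched the same way; `ImageAudit` is a hypothetical helper, not the report tool's implementation.

```python
# Sketch of the image audit: img tags missing (or with empty) alt / title.
from html.parser import HTMLParser

class ImageAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.no_alt = []
        self.no_title = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attr = dict(attrs)
        src = attr.get("src", "")
        if not attr.get("alt"):
            self.no_alt.append(src)
        if not attr.get("title"):
            self.no_title.append(src)

audit = ImageAudit()
audit.feed('<img src="images/me.png">'
           '<img src="images/bodyscope.png" alt="bodyscope sensor" title="bodyscope">')
print(audit.no_alt)  # ['images/me.png']
```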

Images without a TITLE attribute

images/me.png
images/sa2015_logo.png
images/statistics.png
images/animation.png
images/stylesnap.png
images/nugu.png
images/reviewcollage.png
images/talkzones.png
images/mobile_overuse.png
images/turningpoint.png
images/pitchperfect.png
images/escapekeyboard.png
images/sidepoint.png
images/hyperslides.png
images/dementia_care_china.png
images/bodyscope.png
images/spacesense.png
images/tactilecollabo.png
images/unimanual_selection.png
images/oneline.png
images/revspot.png
images/foot_pocket.png
images/manudesk.png
images/semfeel.png
images/oss_diagram.png
images/phone_sustainability.png
images/escape.png
images/eval_stylus_based.png
images/arhunter.png
images/ultrasonic_phase.png
images/toss-it.png
images/pi_book.png
images/musex.png

Images without an ALT attribute

images/me.png
images/sa2015_logo.png
images/statistics.png
images/animation.png
images/stylesnap.png
images/nugu.png
images/reviewcollage.png
images/talkzones.png
images/mobile_overuse.png
images/turningpoint.png
images/pitchperfect.png
images/escapekeyboard.png
images/sidepoint.png
images/hyperslides.png
images/dementia_care_china.png
images/bodyscope.png
images/spacesense.png
images/tactilecollabo.png
images/unimanual_selection.png
images/oneline.png
images/revspot.png
images/foot_pocket.png
images/manudesk.png
images/semfeel.png
images/oss_diagram.png
images/phone_sustainability.png
images/escape.png
images/eval_stylus_based.png
images/arhunter.png
images/ultrasonic_phase.png
images/toss-it.png
images/pi_book.png
images/musex.png

Ranking:

Alexa Traffic: Daily Global Rank Trend, Daily Reach (Percent) [charts]
Majestic SEO [charts]
Text on page:

koji yatani, ph.d. associate professor interactive intelligent systems laboratory department of electrical engineering and information systems (eeis) school of engineering the university of tokyo (also affiliated with emerging design and informatics course, interfaculty initiative in information studies, the university of tokyo) i received my ph.d. at university of toronto under the supervision of prof. khai n. truong at dynamic graphics project. my research interests lie in human-computer interaction (hci) and ubiquitous computing. more specifically, i am interested in developing new sensing technologies to support user interactions in mobile/ubiquitous computing environments, and interactive systems with computational linguistics methods. i am also interested in deeply evaluating interactive systems through quantitative and qualitative approaches. besides hci and ubiquitous computing, i am interested in machine learning, statistical analysis, computational linguistics, psychology, and physiology. if you are interested in joining our lab, please take a look at this page, and contact koji by email. contact office: faculty of engineering building 2, 7-3-1, hongo, bunkyo-ku, tokyo, japan. e-mail: [my_given_name (starting from k)] "at" iis-lab.org research / publication / teaching / cv / hci stats wiki professional activities / biography / photos about me talk at siggraph asia 2015 we have a paper at siggraph asia 2015. this work is in collaboration with jun and li-yi from university of hong kong, and takaaki from microsoft research. statistics for hci research i have published my wiki about some statistical methods useful for hci research (with an emphasis on r). if you are using r and/or know about statistics well, your feedback would be greatly appreciated. research autocomplete hand-drawn animations hand-drawn animation is a major art form and communication medium, but can be challenging to produce.
we present a system to help people create frame-by-frame animations through hand-drawn sketches. we design our interface to be minimalistic; it contains only a canvas for sketches and a few controls. when users are drawing on the canvas, our system silently analyzes all past sketches and predicts what might be drawn in the future across both spatial locations and temporal frames. the interface also offers users suggestions to beautify existing drawings. these predictions and suggestions greatly reduce the workload of creating multiple frames for animation and also help to create desirable results. users can accept, ignore, or modify such predictions visualized on the canvas by simple gestures. our method considers both high level structures and low level repetitions, and can significantly reduce manual workload while helping produce better results. we evaluate our system through a preliminary user study and confirm that it can enhance both users' objective performance and subjective satisfaction. jun xing, li-yi wei, takaaki shiratori, and koji yatani. "autocomplete hand-drawn animations." to appear in proceedings of the siggraph asia 2015, 2015. [pdf] [video] see the publication list see the publication list hide the publication list mixed-initiative approaches to global editing in slideware good alignment and repetition of objects across presentation slides can facilitate visual processing and contribute to audience understanding. however, creating and maintaining such consistency during slide design is difficult. to solve this problem, we present two complementary tools: (1) stylesnap, which increases the alignment and repetition of objects by adaptively clustering object edge positions and allowing parallel editing of all objects snapped to the same spatial extent; and (2) flashformat, which infers the least-general generalization of editing examples and applies it throughout the selected range.
in user studies of repetitive styling task performance, stylesnap and flashformat were 4-5 times and 2-3 times faster respectively than conventional editing. both use a mixed-initiative approach to improve the consistency of slide decks and generalize to any situations involving direct editing across disjoint visual spaces. darren edge, sumit gulwani, natasa milic-frayling, mohammad raza, reza adhitya saputra, chao wang, and koji yatani. "mixed-initiative approaches to global editing in slideware" in proceedings of the sigchi conference on human factors in computing systems (chi 2015), pp. 3503 -- 3512, april 2015. [pdf] see the publication list see the publication list hide the publication list nugu: a group-based intervention app for improving self-regulation of limiting smartphone use our preliminary study reveals that individuals use various management strategies for limiting smartphone use, ranging from keeping smartphones out of reach to removing apps. however, we also found that users often had difficulties in maintaining their chosen management strategies due to the lack of self-regulation. in this paper, we present nugu, a group-based intervention app for improving self-regulation on smartphone use through leveraging social support: groups of people limit their use together by sharing their limiting information. nugu is designed based on social cognitive theory and it has been developed iteratively through two pilot tests. our three-week user study (n = 62) demonstrated that compared with its non-social counterpart, the nugu users' usage amount significantly decreased and their perceived level of managing disturbances improved. furthermore, our exit interview confirmed that nugu's design elements are effective for achieving limiting goals. minsam ko, subin yang, joonwon lee, christian heizmann, jinyoung jeong, uichin lee, daehee shin, koji yatani, junehwa song, and kyong-mee chung.
"nugu: a group-based intervention app for improving self-regulation of limiting smartphone use" in proceedings of the acm conference on computer supported cooperative work and social computing (cscw 2015), pp. 1235 -- 1245, february 2015. [pdf] see the publication list see the publication list hide the publication list reviewcollage: a mobile interface for direct comparison using online reviews review comments posted on online websites can help the user decide on a product to purchase or a place to visit. they can also be useful to closely compare a couple of candidate entities. however, the user may have to read different webpages back and forth for comparison, and this is not desirable particularly when she is using a mobile device. we present reviewcollage, a mobile interface that aggregates information about two reviewed entities in a one-page view. reviewcollage uses attribute-value pairs, known to be effective for review text summarization, and highlights the similarities and differences between the entities. our user study confirms that reviewcollage can support the user to compare two entities and make a decision within a couple of minutes, at least as quickly as existing summarization interfaces. it also reveals that reviewcollage could be most useful when two entities are very similar. haojian jin, tetsuya sakai, and koji yatani. "reviewcollage: a mobile interface for direct comparison using online reviews" in proceedings of the acm sigchi international conference on human computer interaction with mobile devices & services (mobilehci 2014), pp. 349 -- 358, september 2014. honorable mention award winner [pdf] [video] see the publication list see the publication list hide the publication list talkzones: section-based time support for presentations managing time while presenting is challenging, but mobile devices offer both convenience and flexibility in their ability to support the end-to-end process of setting, refining, and following presentation time targets.
from an initial hci-q study of 20 presenters, we identified the need to set such targets per "zone" of consecutive slides (rather than per slide or for the whole talk), as well as the need for feedback that accommodates two distinct attitudes towards presentation timing. these findings led to the design of talkzones, a mobile application for timing support. when giving a 20-slide, 6m40s rehearsed but interrupted talk, 12 participants who used talkzones registered a mean overrun of only 8s, compared with 1m49s for 12 participants who used a regular timer. we observed a similar 2% overrun in our final study of 8 speakers giving rehearsed 30-minute talks in 20 minutes. overall, we show that talkzones can encourage presenters to advance slides before it is too late to recover, even under the adverse timing conditions of short and shortened talks. bahador saket, sijie yang, hong tan, koji yatani, and darren edge. "talkzones: section-based time support for presentations" in proceedings of the acm sigchi international conference on human computer interaction with mobile devices & services (mobilehci 2014), pp. 263 -- 272, september 2014. honorable mention award winner [pdf] [video] see the publication list see the publication list hide the publication list hooked on smartphones: an exploratory study on smartphone overuse among college students the impacts of smartphone addiction on young adults, such as sleep deprivation and attention deficits, are increasingly being recognized. this emerging issue motivated us to identify smartphone usage patterns relating to smartphone addiction. we investigate smartphone usage for 95 college students using surveys, logged data, and interviews. we first divide the participants into risk and non-risk groups based on self-reported psychometric scale data about smartphone addiction. we then analyze the usage data to uncover between-group usage differences, ranging from overall usage patterns to app-specific usage patterns.
our results reveal that compared to the non-risk group, the risk group has longer usage time per day and differences in diurnal usage. the risk group is more susceptible to push notifications, and tends to consume more online content. we identify a relationship between usage features and smartphone addiction with analytic modeling and provide detailed illustration of problematic usage behavior from interview data. uichin lee, joonwon lee, minsam ko, changhun lee, yuhwan kim, subin yang, koji yatani, gahgene gweon, kyong-mee chung, and junehwa song. "hooked on smartphones: an exploratory study on smartphone overuse among college students" in proceedings of the sigchi conference on human factors in computing systems (chi 2014), pp. 2327 -- 2336, april 2014. [pdf] see the publication list see the publication list hide the publication list turningpoint: narrative-driven presentation planning once upon a time, people told stories unencumbered by slides. what modern presentations gain through visual slide support, however, is often at the expense of storytelling. we present turningpoint, a probe to investigate the potential use of narrative-driven talk planning in slideware. our study of turningpoint reveals a delicate balance between narrative templates focusing author attention in ways that save time, and fixating attention in ways that limit experimentation. larissa pschetz, koji yatani, and darren edge. "turningpoint: narrative-driven presentation planning" in proceedings of the sigchi conference on human factors in computing systems (chi 2014), pp. 1591 -- 1594, april 2014. honorable mention award winner [pdf] see the publication list see the publication list hide the publication list pitchperfect: integrated rehearsal environment for structured presentation preparation rehearsal is a critical component of preparing to give an oral presentation, yet it is frequently abbreviated, performed in ways that are inefficient or ineffective, or simply omitted. 
we conducted an exploratory study to understand the relationship between the theory and practice of presentation rehearsal, classifying our qualitative results into five themes to motivate more structured rehearsal support deeply integrated in slide presentation software. in a within-subject study (n=12) comparing against participants' existing rehearsal practices, we found that our resulting pitchperfect system significantly improved overall presentation quality and content coverage as well as provided greater support for content mastery, time management, and confidence building. ha trinh, koji yatani, and darren edge. "pitchperfect: integrated rehearsal environment for structured presentation preparation" in proceedings of the sigchi conference on human factors in computing systems (chi 2014), pp. 1571 -- 1580, april 2014. honorable mention award winner [pdf] see the publication list see the publication list hide the publication list escape-keyboard: a sight-free one-handed text entry method for mobile touch-screen devices mobile text entry methods traditionally have been designed with the assumption that users can devote full visual and mental attention to the device, though this is not always possible. in this paper, we present the design and evaluation of escape-keyboard, a sight-free text entry method for mobile touch-screen devices. escape-keyboard allows the user to type letters with one hand by pressing the thumb on different areas of the screen and consequently performing a flick gesture. our user study showed that participants reached an average typing speed of 14.7 words per minute (wpm) with 4.4% error rate in the sight-free condition and 16.8 wpm with 1.7% error rate in the sighted condition after 16 typing sessions. our qualitative results indicate that the participants had difficulty learning the keyboard layout, which led to slow typing speed improvements over time. we thus implemented and evaluated features to mitigate this learnability issue.
we also performed a theoretical analysis of sight-free performance of our keyboard, which predicts expert peak performance to be 39 wpm. nikola banovic, koji yatani, and khai n. truong. "escape-keyboard: a sight-free one-handed text entry method for mobile touch-screen devices" in international journal of mobile human computer interaction, vol. 5, no. 3, pp. 42 -- 61, 2013. [pdf] see the publication list see the publication list hide the publication list sidepoint: a peripheral knowledge panel for presentation slide authoring presentation authoring is an important activity, but often requires the secondary task of collecting the information and media necessary for both slides and speech. integration of implicit search and peripheral displays into presentation authoring tools may reduce the effort to satisfy not just active needs the author is aware of, but also latent needs that she is not aware of until she encounters content of perceived value. we develop sidepoint, a peripheral panel that supports presentation authoring by showing concise knowledge items relevant to the slide content. we study sidepoint as a technology probe to examine the benefits and issues associated with peripheral knowledge panels for presentation authoring. our results show that peripheral knowledge panels have the potential to satisfy both types of needs in ways that transform presentation authoring for the better. yefeng liu, darren edge, and koji yatani. "sidepoint: a peripheral knowledge panel for presentation slide authoring" in proceedings of the sigchi conference on human factors in computing systems (chi 2013), pp. 681 -- 684, may 2013. [pdf] [video] see the publication list see the publication list hide the publication list hyperslides: dynamic presentation prototyping presentations are a crucial form of modern communication, yet there is a dissonance between everyday practices with presentation tools and best practices from the presentation literature.
we conducted a grounded theory study to gain a better understanding of the activity of presenting, discovering the potential for a more dynamic, automated, and story-centered approach to prototyping slide presentations that are themselves dynamic in their ability to help presenters rehearse and deliver their story. our prototype tool for dynamic presentation prototyping, which we call hyperslides, uses a simple markup language for the creation of hierarchically structured scenes, which are algorithmically transformed into hyperlinked slides of a consistent and minimalist style. our evaluation suggests that hyperslides helps idea organization, saves authoring time, creates aesthetic layouts, and supports more flexible rehearsal and delivery than linear slides, at the expense of reduced layout control and increased navigation demands. darren edge, joan m. savage, and koji yatani. "hyperslides: dynamic presentation prototyping" in proceedings of the sigchi conference on human factors in computing systems (chi 2013), pp. 671 -- 680, may 2013. [pdf] see the publication list see the publication list hide the publication list communication and coordination for institutional dementia caregiving in china with a general trend worldwide towards greater life expectancies, interventions and tools that can help caregivers working in elder care are becoming increasingly important. in china, with a greater number and proportion of elders due to the long-term effects of the one-child policy, these interventions and tools are needed even more. improved communication between care staff of an institutional home can reduce medical errors and improve coordination of care. at the same time, increased conversation with elders with cognitive impairments like dementia or alzheimer's can help the elder to maintain their cognitive ability, and can reduce negative feelings like loneliness. 
our qualitative study with eleven institutional caregivers in beijing delved into the communication patterns that exist between caregivers and elders with dementia. we found that knowing more about each individual resident's disposition and personal history was helpful in maintaining quality care, that many care staff in china use placating talk as a means to calm or guide elders to a desired action, and that care staff found the topic of past careers or past 'glories' to be the most efficient in getting elders to chat. in addition, we also found that much of the information that is gleaned through working with an elder long-term is not recorded or shared in any official capacity with other care workers, an area where technology could be particularly helpful. claire l. barco, koji yatani, yuanye ma, candra k. gill, and joyojeet pal. "information management and communication for dementia: preliminary research from china" in proceedings of iconference 2013, pp. 571 -- 575, february 2013. [pdf] see the publication list see the publication list hide the publication list bodyscope: a wearable acoustic sensor for activity recognition accurate activity recognition enables the development of a variety of ubiquitous computing applications, such as context-aware systems, lifelogging, and personal health systems. wearable sensing technologies can be used to gather data for activity recognition without requiring sensors to be installed in the infrastructure. however, the user may need to wear multiple sensors for accurate recognition of a larger number of different activities. we developed a wearable acoustic sensor, called bodyscope, to record the sounds produced in the user's throat area and classify them into user activities, such as eating, drinking, speaking, laughing, and coughing. the f-measure of the support vector machine classification of 12 activities using only our bodyscope sensor was 79.5%.
we also conducted a small-scale in-the-wild study, and found that bodyscope was able to identify four activities (eating, drinking, speaking, and laughing) at 71.5% accuracy. koji yatani, and khai n. truong. "bodyscope: a wearable acoustic sensor for activity recognition" in proceedings of international conference on ubiquitous computing (ubicomp 2012), pp. 341 -- 350, september 2012. [pdf] [video] see the publication list see the publication list hide the publication list spacesense: representing geographical information to visually impaired people using spatial tactile feedback learning an environment can be challenging for people with visual impairments. braille maps allow their users to understand the spatial relationship between a set of places. however, physical braille maps are often costly, may not always cover an area of interest with sufficient detail, and might not present up-to-date information. we built a handheld system for representing geographical information called spacesense, which includes custom spatial tactile feedback hardware: multiple vibration motors attached to different locations on a mobile touch-screen device. it offers high-level information about the distance and direction towards a destination and bookmarked places through vibrotactile feedback to help the user maintain the spatial relationships between these points. spacesense also adapts a summarization technique for online user reviews of public and commercial venues. our user study shows that participants could build and maintain the spatial relationships between places on a map more accurately with spacesense compared to a system without spatial tactile feedback. they pointed specifically to having spatial tactile feedback as the contributing factor in successfully building and maintaining their mental map. koji yatani, nikola banovic, and khai n. truong.
"spacesense: representing geographical information to visually impaired people using spatial tactile feedback" in proceedings of the sigchi conference on human factors in computing systems (chi 2012), pp. 415 -- 424, may 2012. [pdf] see the publication list see the publication list hide the publication list investigating effects of visual and tactile feedback on spatial coordination in collaborative handheld systems mobile and handheld devices have become platforms to support remote collaboration. but their small form-factor may impact the effectiveness of the visual feedback channel often used to help users maintain an awareness of their partner's activities during synchronous collaborative tasks. we investigated how visual and tactile feedback affects collaboration on mobile devices, with emphasis on spatial coordination in a shared workspace. from two user studies, our results highlight different benefits of each feedback channel in collaborative handheld systems. visual feedback can provide precise spatial information for collaborators, but degrades collaboration when the feedback is occluded, and sometimes can distract the user's attention. spatial tactile feedback can reduce the overload of information in visual space and gently guides the user's attention to an area of interest. our results also show that visual and tactile feedback can complement each other, and systems using both feedback channels can support better spatial coordination than systems using only one form of feedback. koji yatani, darren gergle, and khai n. truong. "investigating effects of visual and tactile feedback on spatial coordination in collaborative handheld systems" in proceedings of the acm conference on computer supported cooperative work (cscw 2012), pp. 661 -- 670, february 2012.
[pdf]

design of unimanual multi-finger pie menu interaction

context menus, most commonly the right-click menu, are a traditional method of interaction when using a keyboard and mouse. context menus make a subset of the application's commands quickly available to the user. however, on tabletop touchscreen computers, context menus have all but disappeared. in this work, we investigate how to design context menus for efficient unimanual multi-touch use. we investigate the limitations of the arm, wrist, and fingers and how they relate to human performance in multi-target selection tasks on a multi-touch surface. we show that selecting targets with multiple fingers simultaneously improves target selection performance compared to traditional single-finger selection, but also increases errors. informed by these results, we present our own context menu design for horizontal tabletop surfaces. nikola banovic, frank chun yat li, david dearman, koji yatani, and khai n. truong. "design of unimanual multi-finger pie menu interaction" in proceedings of the acm conference on interactive tabletops and surfaces (its 2011), pp. 120 -- 129, november 2011. [pdf] [publisher]

the 1line keyboard: a qwerty layout in a single line

current soft qwerty keyboards often consume a large portion of the screen space on portable touchscreens. this space consumption can diminish the overall user experience on these devices. in this work, we present the 1line keyboard, a soft qwerty keyboard that is 140 pixels tall (in landscape mode) and 40% of the height of the native ipad qwerty keyboard. our keyboard condenses the three rows of keys in the normal qwerty layout into a single line with eight keys. the sizing of the eight keys is based on users' mental layout of a qwerty keyboard on an ipad.
the system disambiguates the word the user types based on the sequence of keys pressed. the user can use flick gestures to perform backspace and enter, and tap on the bezel below the keyboard to input a space. through an evaluation, we show that participants are able to quickly learn how to use the 1line keyboard and type at a rate of over 30 wpm after just five 20-minute typing sessions. using a keystroke-level model, we predict the peak expert text entry rate with the 1line keyboard to be 66 -- 68 wpm. frank chun yat li, richard t. guy, koji yatani, and khai n. truong. "the 1line keyboard: a qwerty layout in a single line" in proceedings of the acm symposium on user interface software and technology (uist 2011), pp. 461 -- 470, october 2011. [pdf] [publisher]

review spotlight: a user interface for summarizing user-generated reviews using adjective-noun word pairs

many people read online reviews written by other users to learn more about a product or venue. however, the overwhelming amount of user-generated reviews and the variance in length, detail, and quality across reviews make it difficult to glean useful information. in this work, we present the iterative design of our system, called review spotlight. it provides a brief overview of reviews using adjective-noun word pairs, and allows the user to quickly explore the reviews in greater detail. through a laboratory user study that required participants to perform decision-making tasks, we showed that participants could form detailed impressions about restaurants and decide between two options significantly faster with review spotlight than with traditional review webpages. koji yatani, michael novati, andrew trusty, and khai n. truong. "analysis of adjective-noun word pair extraction methods for online review summarization" in proceedings of the international joint conference on artificial intelligence (ijcai 2011), pp.
2771 -- 2776, july 2011. [pdf] koji yatani, michael novati, andrew trusty, and khai n. truong. "review spotlight: a user interface for summarizing user-generated reviews using adjective-noun word pairs" in proceedings of the sigchi conference on human factors in computing systems (chi 2011), pp. 1541 -- 1550, april 2011. best paper award winner [pdf] [publisher]

sensing foot gestures from the pocket

visually demanding interfaces on a mobile phone can diminish the user experience by monopolizing the user's attention when they are focusing on another task, and impede accessibility for visually impaired users. because mobile devices are often located in pockets when users are mobile, explicit foot movements can be defined as eyes-and-hands-free input gestures for interacting with the device. in this work, we study the human capability associated with performing foot-based interactions that involve lifting and rotation of the foot when pivoting on the toe and heel. building upon these results, we then developed a system to learn and recognize foot gestures using a single commodity mobile phone placed in the user's pocket or in a holster on their hip. our system uses acceleration data recorded by a built-in accelerometer on the mobile device and a machine learning approach to recognize gestures. through a lab study, we demonstrate that our system can classify ten different foot gestures at approximately 86% accuracy. jeremy scott, david dearman, koji yatani, and khai n. truong. "sensing foot gestures from the pocket" in proceedings of the acm symposium on user interface software and technology (uist 2010), pp. 199 -- 208, october 2010. [pdf] [video]

pen + touch = new tools (also known as manual deskterity)

we describe techniques for direct pen+touch input.
we observe people's manual behaviors with physical paper and notebooks. these serve as the foundation for a prototype microsoft surface application, centered on note-taking and scrapbooking of materials. based on our explorations, we advocate a division of labor between pen and touch: the pen writes, touch manipulates, and the combination of pen+touch yields new tools. this division articulates how our system interprets unimodal pen, unimodal touch, and multimodal pen+touch inputs. for example, the user can hold a photo and drag off with the pen to create and place a copy; hold a photo and cross it in a freeform path with the pen to slice it in two; or hold selected photos and tap one with the pen to staple them all together. touch thus unifies object selection with mode switching of the pen, while the muscular tension of holding touch serves as the glue that phrases together all the inputs into a unitary multimodal gesture. this helps the ui designer avoid encumbrances such as physical buttons, persistent modes, or widgets that detract from the user's focus on the workspace. ken hinckley, koji yatani, michel pahud, nicole coddington, jenny rodenhouse, andy wilson, hrvoje benko, and bill buxton. "pen + touch = new tools" in proceedings of the acm symposium on user interface software and technology (uist 2010), pp. 27 -- 36, october 2010. [pdf] ken hinckley, koji yatani, michel pahud, nicole coddington, jenny rodenhouse, andy wilson, hrvoje benko, and bill buxton. "manual deskterity: an exploration of simultaneous pen + touch direct input" in extended abstracts of the sigchi conference on human factors in computing systems (chi 2010), pp. 2793 -- 2802, april 2010.
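the division of labor above can be read as a small input-dispatch rule. a sketch under assumed event flags (not the actual surface api):

```python
def interpret(pen_down, touch_down):
    """map the current contact state to a behavior, following the
    pen-writes / touch-manipulates / pen+touch-makes-tools division."""
    if pen_down and touch_down:
        # e.g. hold a photo with touch and drag off with the pen -> copy
        return "combined pen+touch tool"
    if pen_down:
        return "ink"          # unimodal pen writes
    if touch_down:
        return "manipulate"   # unimodal touch moves and zooms objects
    return "idle"
```

the point of the design is that no explicit mode switch is needed: the held touch itself is the mode.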
[pdf] [video] [publisher]

semfeel: a user interface with semantic tactile feedback for mobile touch-screen devices

one of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. thus, the user is required to look at the screen to interact with these devices. in this paper, we present semfeel, a tactile feedback system that informs the user about the presence of an object where she touches the screen and can offer additional semantic information about that item. through multiple vibration motors attached to the backside of a mobile touch-screen device, semfeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that semfeel supports accurate eyes-free interactions. koji yatani. "towards designing user interfaces on mobile touch-screen devices for people with visual impairment" in extended abstracts of the acm symposium on user interface software and technology (uist 2009), pp. 37 -- 40, october 2009. [pdf] koji yatani and khai n. truong. "semfeel: a user interface with semantic tactile feedback for mobile touch-screen devices" in proceedings of the acm symposium on user interface software and technology (uist 2009), pp. 111 -- 120, october 2009. [pdf] [video] [publisher]

understanding how and why open source contributors use diagrams

some of the most interesting differences between open source software (oss) development and commercial co-located software development lie in the communication and collaboration practices of these two groups of developers.
one interesting practice is diagramming. though well studied and important in many aspects of co-located software development (including communication and collaboration among developers), its role in oss development has not been thoroughly studied. in this project, we investigate how and why oss contributors use diagrams in their work. we explore differences in the use and practices of diagramming, their possible reasons, and present design considerations for potential systems aimed at better supporting diagram use in oss development. eunyoung chung, carlos jensen, koji yatani, victor kuechler, and khai n. truong. "sketching and drawing in the design of open source software" in proceedings of the ieee symposium on visual languages and human-centric computing (vl/hcc 2010), pp. 195 -- 202, september 2010. [pdf] koji yatani, eunyoung chung, carlos jensen, and khai n. truong. "understanding how and why open source contributors use diagrams in the development of ubuntu" in proceedings of the sigchi conference on human factors in computing systems (chi 2009), pp. 995 -- 1004, april 2009. [pdf] [publisher]

understanding mobile phone situated sustainability: the influence of local constraints and practices on transferability

mobile phones are the most prevalent example of pervasive computing technologies in use today, with phone subscriptions reaching 3.3 billion in 2007. according to a 2005 estimate, consumers discard roughly 125 million mobile phones into landfills every year. although devices continue to proliferate, viable options for ecologically responsible solutions remain elusive, inaccessible, or unknown to users. we examine people's practices with mobile phones, particularly those surrounding end-of-use. we focus on the differences and commonalities between practices in north america, japan, and germany, and the impact of varying local constraints on mobile phone sustainability.
building upon previous research examining sustainability and mobile phone ownership decisions, we explore the notion of situated sustainability by looking at how mobile phone sustainability is affected by local and community factors. elaine m. huang, koji yatani, khai n. truong, julie a. kientz, and shwetak n. patel. "understanding mobile phone situated sustainability: the influence of local constraints and practices on transferability" in ieee pervasive computing, vol. 8, no. 1, pp. 46 -- 53, january 2009. [pdf] [publisher]

escape: a target selection technique using visually-cued gestures

many mobile devices have touch-sensitive screens that people interact with using fingers or thumbs. however, such interaction is difficult because targets become occluded, and because fingers and thumbs have low input resolution. recent research has addressed occlusion through visual techniques. however, the poor resolution of finger and thumb selection still limits selection speed. in this paper, we address the selection speed problem through a new target selection technique called escape. in escape, targets are selected by gestures cued by icon position and appearance. a user study shows that for targets six to twelve pixels wide, escape performs at a similar error rate and at least 30% faster than shift, an alternative technique, on a similar task. we evaluate escape's performance in different circumstances, including different icon sizes, icon overlap, use of color, and gesture direction. we also describe an algorithm that assigns icons to targets, thereby improving escape's performance. koji yatani, kurt partridge, marshall bern, and mark w. newman. "escape: a target selection technique using visually-cued gestures" in proceedings of the sigchi conference on human factors in computing systems (chi 2008), pp. 285 -- 294, april 2008.
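the icon-assignment idea can be approximated greedily: give each target a gesture direction that no overlapping neighbor already uses, so a coarse directional flick disambiguates between occluded targets. this sketch is a simplification for illustration, not the published algorithm:

```python
import math

DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def assign_directions(targets, radius=24.0):
    """assign each (x, y) target a gesture direction, avoiding directions
    already taken by neighbors closer than `radius` pixels."""
    assigned = []  # (x, y, direction)
    for x, y in targets:
        taken = {d for tx, ty, d in assigned
                 if math.hypot(tx - x, ty - y) < radius}
        free = [d for d in DIRECTIONS if d not in taken]
        # fall back to the first direction if more than 8 targets collide
        assigned.append((x, y, free[0] if free else DIRECTIONS[0]))
    return [d for _, _, d in assigned]
```

nearby targets thus get distinct flick directions, while far-apart targets can reuse the same one.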
[pdf] [video] [publisher] see the publication list see the publication list hide the publication list an evaluation of stylus-based text entry methods on handheld devices in stationary and mobile scenarios effective text entry on handheld devices remains a significant problem in the field of mobile computing. on a personal digital assistant (pda), text entry methods traditionally support input through the motion of a stylus held in the user's dominant hand. in this paper, we present the design of a two-handed software keyboard for a pda which specifically takes advantage of the thumb in the non-dominant hand. we compare our chorded keyboard design to other stylus-based text entry methods in an evaluation that studies user input in both stationary and mobile settings. our study shows that users type fastest using the miniqwerty keyboard, and most accurately using our two-handed keyboard. we also discovered a difference in input performance with the mini-qwerty keyboard between stationary and mobile settings. as a user walks, text input speed decreases while error rates and mental workload increases; however, these metrics remain relatively stable in our two-handed technique despite user mobility. koji yatani, and khai n. truong. "an evaluation of stylus-based text entry methods on handheld devices studied in different mobility states" in pervasive and mobile computing, vol. 5, no. 5, pp. 496 -- 506, october 2009. [pdf] [publisher] koji yatani and khai n. truong. "an evaluation of stylus-based text entry methods on handheld devices in stationary and mobile scenarios" in proceedings of the nineth acm sigchi international conference on human computer interaction with mobile devices & services (mobilehci 2007), pp. 145 -- 152, september 2007. 
[pdf] [publisher]

a multiplayer whack-a-mole game using gestural input in a location-sensitive and immersive environment

arhunter is a computer-enhanced multi-player whack-a-mole game. it creates an immersive entertainment environment combining gestural input and location recognition technologies, aiming to increase the players' engagement and excitement. koji yatani, masanori sugimoto, and hiromichi hashizume. "a multiplayer whack-a-mole game using gestural input in a location-sensitive and immersive environment" in extended abstracts of the international conference on entertainment computing (icec 2005), pp. 9 -- 12, september 2005. [pdf] [video] koji yatani, masanori sugimoto, and hiromichi hashizume. "arhunter: a multiplayer game using gestural input in a location-sensitive and immersive environment" in workshop on ubiquitous computing, entertainment and games at the seventh international conference on ubiquitous computing (ubicomp 2005), september 2005. [pdf]

a fast and accurate positioning technique using the ultrasonic phase accordance method

we developed a positioning technique using ultrasonic signals. our technique can accurately identify the relative distance and orientation between devices using a one-time ultrasonic packet. the technique, named the phase accordance method, uses two or more carriers in ultrasonic communication. a special ultrasonic burst signal, called a sync pattern, in the header part of the communication packet gives the base point of the time measurement. the whole time-difference calculation is then carried out using this base point. an experiment showed that the technique yielded errors of less than ±1 mm for 3 m distance measurements and less than 0.5 degrees for angles smaller than 30 degrees.
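the two quantities such an ultrasonic positioning technique estimates can be written down directly: range from time of flight, and bearing from the phase difference between two receivers. a textbook sketch under assumed constants (the sync-pattern protocol itself is not reproduced here):

```python
import math

SPEED_OF_SOUND = 346.0  # m/s in air at roughly 25 degrees c (assumed)

def distance_from_tof(t_seconds):
    """range from a one-way ultrasonic time of flight."""
    return SPEED_OF_SOUND * t_seconds

def bearing_from_phase(delta_phi, freq_hz, spacing_m):
    """angle of arrival from the inter-receiver phase difference:
    theta = asin(c * delta_phi / (2 * pi * f * d))."""
    s = SPEED_OF_SOUND * delta_phi / (2.0 * math.pi * freq_hz * spacing_m)
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))
```

the spacing between receivers must stay below half a wavelength, or the phase difference becomes ambiguous.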
hiromichi hashizume, ayumu kaneko, yusuke sugano, koji yatani, and masanori sugimoto. "fast and accurate positioning technique using ultrasonic phase accordance method" in proceedings of the ieee region 10 conference (tencon 2005), pp. 826 -- 831, november 2005. [pdf] [publisher]

toss-it: intuitive information transfer techniques for mobile devices

toss-it provides intuitive information transfer techniques for mobile devices by fully utilizing their mobility. a user of toss-it can send information from the user's pda to other electronic devices with a toss or swing action, as the user would toss a ball or deal cards to others. toss-it uses inertial sensors and optical markers to recognize the user's gestures and location. koji yatani, koiti tamura, keiichi hiroki, masanori sugimoto, and hiromichi hashizume. "toss-it: intuitive information transfer techniques for mobile devices using toss and swing actions" in ieice transactions on systems and computers, vol. e89-d, no. 1, pp. 150 -- 157, january 2006. [pdf] [publisher] koji yatani, koiti tamura, keiichi hiroki, masanori sugimoto, and hiromichi hashizume. "toss-it: intuitive information transfer techniques for mobile devices" in extended abstracts of the sigchi conference on human factors in computing systems (chi 2005), pp. 1881 -- 1884, april 2005. [pdf] [video] [publisher] koji yatani, koiti tamura, masanori sugimoto, and hiromichi hashizume. "information transfer techniques for mobile devices by toss and swing actions" in proceedings of the sixth ieee workshop on mobile computing systems and applications (wmsca 2004), pp. 144 -- 151, december 2004. [pdf] [publisher]

an interactive and enjoyable educational system in a museum

we developed a system called pi_book to support children's exploration in a science museum.
pi_book provides additional content about the exhibits on pdas. the content on the pdas is designed to be interactive in order to increase children's interest in the exhibits. our system helps children take an interest in scientific phenomena that are often difficult to understand without any assistance. fusako kusunoki, takako yamaguti, takuichi nishimura, koji yatani, and masanori sugimoto. "interactive and enjoyable interface in museum" in proceedings of the acm sigchi international conference on advances in computer entertainment technology (ace 2005), pp. 1 -- 8, june 2005. [pdf] [publisher]

musex: a system for supporting children's collaborative learning in a museum with pdas

musex supports children's collaborative learning and exploration in a museum with pdas (personal digital assistants). musex provides questions about exhibits that are not interactive, such as explanatory panels and videos. in this manner, our system encourages children to look into these exhibits. our user study with musex revealed that children interacted with the exhibits actively and were engaged in solving questions with musex. koji yatani, mayumi onuma, masanori sugimoto, and fusako kusunoki. "musex: a system for supporting children's collaborative learning in a museum with pdas" in systems and computers in japan, vol. 35, no. 14, pp. 54 -- 63, december 2004. [pdf] [publisher] koji yatani, masanori sugimoto, and fusako kusunoki. "musex: a system for supporting children's collaborative learning in a museum with pdas" in proceedings of the second ieee workshop on wireless and mobile technology in education (wmte 2004), pp. 109 -- 113, march 2004.
[pdf] [video] [publisher]

professional activities

program chair: ubicomp 2015
program committee: ah; chi; mobisys; ubicomp; uist; world haptics conference
conference committee: video chair, ubicomp 2013; mentoring chair, its 2012
reviewer (journal): acm transactions on computer-human interaction; ieee transactions on haptics; international journal of human-computer studies (elsevier); pervasive and mobile computing (elsevier)
reviewer (conference): ace; apchi; apsipa annual summit and conference; chi; cscw; dis; gi; internet of things conference; its (formerly tabletop); iui; mobilehci; nordichi; pervasive; tei; ubicomp; uist; 3dui
reviewer (japanese domestic journal): ieice transactions on fundamentals of electronics, communications and computer sciences; ieice transactions on information and systems; transactions of the human interface society
student volunteer: chi (2010); ijcai (2011)

biography

dr. koji yatani (http://yatani.jp) is an associate professor in the department of electrical engineering and information systems (eeis), school of engineering at the university of tokyo, where he leads the interactive intelligent systems laboratory (http://iis-lab.org). he is also affiliated with the emerging design and informatics course, interfaculty initiative in information studies. his main research interests lie in human-computer interaction (hci) and ubiquitous computing. his current research focus is to develop new applications and redesign user experiences on mobile and wearable devices to better support users' creativity and productivity. he also extends his research interests to a broader range of topics in hci, including mobile interaction techniques, wearable sensing methods, quantitative and qualitative studies for understanding users, productivity tool design, and assistive technologies. he received his b.eng. and m.sci. from the university of tokyo in 2003 and 2005, respectively, and his ph.d.
in computer science from the university of toronto in 2011. in november 2011, he joined the hci group at microsoft research asia in beijing, china. in october 2013, he took an appointment as a visiting associate professor in the graduate school of information science and technology at the university of tokyo. he then joined the university of tokyo as a full-time associate professor in august 2014. he was a recipient of the ntt docomo scholarship (october 2003 -- march 2005) and the japan society for the promotion of science research fellowship for young scientists (april 2005 -- march 2006). he received two best paper awards at chi (2011 and 2016) as well as four honorable mention awards at chi (2014) and mobilehci (2014). he served as a program co-chair for ubicomp 2015. he has also served on the program committees of major international conferences in the fields of hci, ubiquitous computing, and haptics, including chi (2013), ubicomp (2012 -- 2014), uist (2013), mobisys (2014), and whc (2013).


Here you find all texts from your page as Google (googlebot) and other search engines see it.

Word density analysis

Total number of words: 6940

One-word frequencies:

the - 5.4% (375)
and - 4.18% (290)
for - 1.96% (136)
use - 1.9% (132)
public - 1.33% (92)
list - 1.33% (92)
publication - 1.31% (91)
user - 1.11% (77)
chi - 1.02% (71)
are - 0.98% (68)
per - 0.97% (67)
with - 0.95% (66)
see - 0.86% (60)
mobile - 0.85% (59)
form - 0.84% (58)
that - 0.81% (56)
our - 0.79% (55)
system - 0.72% (50)
present - 0.71% (49)
all - 0.68% (47)
ten - 0.65% (45)
one - 0.65% (45)
yat - 0.63% (44)
can - 0.61% (42)
yatani - 0.61% (42)
koji - 0.61% (42)
[pdf] - 0.56% (39)
art - 0.55% (38)
pp. - 0.53% (37)
device - 0.53% (37)
how - 0.48% (33)
using - 0.48% (33)
yatani, - 0.45% (31)
conference - 0.45% (31)
devices - 0.43% (30)
hide - 0.43% (30)
computing - 0.43% (30)
systems - 0.43% (30)
interact - 0.42% (29)
proceedings - 0.42% (29)
his - 0.4% (28)
presentation - 0.4% (28)
review - 0.4% (28)
support - 0.37% (26)
rate - 0.37% (26)
able - 0.37% (26)
out - 0.37% (26)
touch - 0.37% (26)
information - 0.37% (26)
phone - 0.37% (26)
back - 0.37% (26)
app - 0.36% (25)
human - 0.35% (24)
lab - 0.35% (24)
feedback - 0.33% (23)
slide - 0.33% (23)
keyboard - 0.33% (23)
study - 0.33% (23)
design - 0.33% (23)
she - 0.32% (22)
visual - 0.32% (22)
hand - 0.32% (22)
this - 0.32% (22)
oss - 0.3% (21)
technique - 0.29% (20)
over - 0.29% (20)
also - 0.29% (20)
work - 0.27% (19)
interface - 0.27% (19)
pen - 0.27% (19)
part - 0.27% (19)
time - 0.27% (19)
labor - 0.27% (19)
text - 0.27% (19)
line - 0.26% (18)
from - 0.26% (18)
method - 0.26% (18)
develop - 0.24% (17)
between - 0.24% (17)
base - 0.24% (17)
through - 0.24% (17)
[publisher] - 0.24% (17)
users - 0.24% (17)
led - 0.24% (17)
computer - 0.24% (17)
perform - 0.23% (16)
screen - 0.23% (16)
truong - 0.23% (16)
has - 0.23% (16)
khai - 0.23% (16)
two - 0.23% (16)
sigchi - 0.23% (16)
spatial - 0.23% (16)
but - 0.23% (16)
their - 0.23% (16)
soft - 0.23% (16)
smartphone - 0.22% (15)
search - 0.22% (15)
tactile - 0.22% (15)
help - 0.22% (15)
input - 0.22% (15)
interaction - 0.22% (15)
not - 0.22% (15)
hci - 0.22% (15)
you - 0.22% (15)
based - 0.22% (15)
factor - 0.22% (15)
gesture - 0.22% (15)
video - 0.2% (14)
which - 0.2% (14)
2005 - 0.2% (14)
truong. - 0.2% (14)
research - 0.2% (14)
call - 0.19% (13)
factors - 0.19% (13)
show - 0.19% (13)
acm - 0.19% (13)
ability - 0.19% (13)
interest - 0.19% (13)
point - 0.19% (13)
about - 0.19% (13)
space - 0.19% (13)
target - 0.17% (12)
learn - 0.17% (12)
more - 0.17% (12)
these - 0.17% (12)
under - 0.17% (12)
gestures - 0.17% (12)
main - 0.17% (12)
[video] - 0.17% (12)
edge - 0.17% (12)
ones - 0.17% (12)
care - 0.17% (12)
talk - 0.17% (12)
(chi - 0.17% (12)
usage - 0.16% (11)
escape - 0.16% (11)
international - 0.16% (11)
entry - 0.16% (11)
different - 0.16% (11)
slides - 0.16% (11)
low - 0.16% (11)
practice - 0.16% (11)
pair - 0.16% (11)
group - 0.16% (11)
people - 0.16% (11)
too - 0.16% (11)
however, - 0.16% (11)
active - 0.16% (11)
software - 0.16% (11)
communication - 0.16% (11)
2015 - 0.14% (10)
april - 0.14% (10)
understand - 0.14% (10)
selection - 0.14% (10)
tool - 0.14% (10)
pda - 0.14% (10)
know - 0.14% (10)
participants - 0.14% (10)
such - 0.14% (10)
have - 0.14% (10)
methods - 0.14% (10)
toss - 0.14% (10)
user's - 0.14% (10)
into - 0.14% (10)
technology - 0.14% (10)
uist - 0.14% (10)
author - 0.14% (10)
results - 0.14% (10)
reviews - 0.14% (10)
held - 0.14% (10)
its - 0.13% (9)
any - 0.13% (9)
handheld - 0.13% (9)
practices - 0.13% (9)
than - 0.13% (9)
limit - 0.13% (9)
interactive - 0.13% (9)
touch-screen - 0.13% (9)
sugimoto - 0.13% (9)
when - 0.13% (9)
paper - 0.13% (9)
children - 0.13% (9)
both - 0.13% (9)
university - 0.13% (9)
qwerty - 0.13% (9)
masanori - 0.13% (9)
may - 0.13% (9)
pattern - 0.13% (9)
ubiquitous - 0.13% (9)
performance - 0.13% (9)
learning - 0.12% (8)
tools - 0.12% (8)
often - 0.12% (8)
error - 0.12% (8)
typing - 0.12% (8)
elder - 0.12% (8)
difference - 0.12% (8)
maintain - 0.12% (8)
authoring - 0.12% (8)
targets - 0.12% (8)
accurate - 0.12% (8)
techniques - 0.12% (8)
provide - 0.12% (8)
2012 - 0.12% (8)
collaborative - 0.12% (8)
october - 0.12% (8)
environment - 0.12% (8)
content - 0.12% (8)
finger - 0.12% (8)
ubicomp - 0.12% (8)
compare - 0.12% (8)
studies - 0.12% (8)
menu - 0.12% (8)
transfer - 0.1% (7)
task - 0.1% (7)
september - 0.1% (7)
rehearsal - 0.1% (7)
museum - 0.1% (7)
layout - 0.1% (7)
found - 0.1% (7)
attention - 0.1% (7)
pdas - 0.1% (7)
location - 0.1% (7)
understanding - 0.1% (7)
manual - 0.1% (7)
each - 0.1% (7)
award - 0.1% (7)
new - 0.1% (7)
need - 0.1% (7)
2014), - 0.1% (7)
online - 0.1% (7)
object - 0.1% (7)
top - 0.1% (7)
evaluation - 0.1% (7)
tokyo - 0.1% (7)
wear - 0.1% (7)
off - 0.1% (7)
type - 0.1% (7)
development - 0.1% (7)
improve - 0.1% (7)
mode - 0.1% (7)
direct - 0.1% (7)
darren - 0.1% (7)
other - 0.1% (7)
musex - 0.1% (7)
yatani. - 0.1% (7)
foot - 0.1% (7)
patterns - 0.1% (7)
activities - 0.1% (7)
sensor - 0.1% (7)
reduce - 0.1% (7)
application - 0.09% (6)
most - 0.09% (6)
late - 0.09% (6)
give - 0.09% (6)
2014. - 0.09% (6)
reviewcollage - 0.09% (6)
associate - 0.09% (6)
investigate - 0.09% (6)
ways - 0.09% (6)
set - 0.09% (6)
differences - 0.09% (6)
data - 0.09% (6)
place - 0.09% (6)
context - 0.09% (6)
symposium - 0.09% (6)
position - 0.09% (6)
hiromichi - 0.09% (6)
build - 0.09% (6)
increase - 0.09% (6)
better - 0.09% (6)
chun - 0.09% (6)
level - 0.09% (6)
sustainability - 0.09% (6)
visually - 0.09% (6)
word - 0.09% (6)
phones - 0.09% (6)
collaboration - 0.09% (6)
difficult - 0.09% (6)
serve - 0.09% (6)
ieee - 0.09% (6)
diagram - 0.09% (6)
cover - 0.09% (6)
children's - 0.09% (6)
called - 0.09% (6)
ultrasonic - 0.09% (6)
transactions - 0.09% (6)
recognition - 0.09% (6)
peripheral - 0.09% (6)
panel - 0.09% (6)
wearable - 0.09% (6)
2005) - 0.09% (6)
nugu - 0.09% (6)
dynamic - 0.09% (6)
activity - 0.09% (6)
coordination - 0.09% (6)
toss-it - 0.09% (6)
fast - 0.09% (6)
risk - 0.07% (5)
pervasive - 0.07% (5)
exhibitions - 0.07% (5)
icon - 0.07% (5)
2005), - 0.07% (5)
significant - 0.07% (5)
talkzones - 0.07% (5)
stylus - 0.07% (5)
similar - 0.07% (5)
game - 0.07% (5)
young - 0.07% (5)
2009. - 0.07% (5)
hashizume. - 0.07% (5)
2005. - 0.07% (5)
reveal - 0.07% (5)
vol. - 0.07% (5)
relationship - 0.07% (5)
japan - 0.07% (5)
peak - 0.07% (5)
wpm - 0.07% (5)
minute - 0.07% (5)
speed - 0.07% (5)
china - 0.07% (5)
elders - 0.07% (5)
many - 0.07% (5)
area - 0.07% (5)
thumb - 0.07% (5)
mobilehci - 0.07% (5)
bodyscope - 0.07% (5)
spacesense - 0.07% (5)
mental - 0.07% (5)
traditional - 0.07% (5)
including - 0.07% (5)
single - 0.07% (5)
sight-free - 0.07% (5)
own - 0.07% (5)
2011. - 0.07% (5)
1line - 0.07% (5)
keys - 0.07% (5)
predict - 0.07% (5)
no. - 0.07% (5)
(uist - 0.07% (5)
detail - 0.07% (5)
cross - 0.07% (5)
focus - 0.07% (5)
semfeel - 0.07% (5)
science - 0.07% (5)
knowledge - 0.07% (5)
well - 0.07% (5)
developed - 0.07% (5)
animation - 0.07% (5)
engineering - 0.07% (5)
create - 0.07% (5)
uses - 0.07% (5)
entities - 0.07% (5)
only - 0.07% (5)
lee, - 0.07% (5)
ko, - 0.07% (5)
effective - 0.07% (5)
compared - 0.07% (5)
lie - 0.07% (5)
mention - 0.07% (5)
paper, - 0.07% (5)
sensing - 0.07% (5)
technologies - 0.07% (5)
limiting - 0.07% (5)
drawn - 0.07% (5)
intervention - 0.07% (5)
approach - 0.07% (5)
building - 0.07% (5)
multiple - 0.07% (5)
editing - 0.07% (5)
jun - 0.07% (5)
honorable - 0.07% (5)
interests - 0.07% (5)
2015. - 0.07% (5)
presentations - 0.07% (5)
winner - 0.07% (5)
presenting - 0.07% (5)
intuitive - 0.06% (4)
prototyping - 0.06% (4)
social - 0.06% (4)
provides - 0.06% (4)
generate - 0.06% (4)
interested - 0.06% (4)
hyperslides - 0.06% (4)
pairs - 0.06% (4)
dementia - 0.06% (4)
general - 0.06% (4)
professor - 0.06% (4)
management - 0.06% (4)
errors - 0.06% (4)
2013), - 0.06% (4)
2011), - 0.06% (4)
abstract - 0.06% (4)
sugimoto, - 0.06% (4)
escape-keyboard - 0.06% (4)
exploration - 0.06% (4)
pocket - 0.06% (4)
initiative - 0.06% (4)
photo - 0.06% (4)
extended - 0.06% (4)
keyboard, - 0.06% (4)
useful - 0.06% (4)
hand-drawn - 0.06% (4)
journal - 0.06% (4)
2013. - 0.06% (4)
aware - 0.06% (4)
supports - 0.06% (4)
sidepoint - 0.06% (4)
spotlight - 0.06% (4)
adjective-noun - 0.06% (4)
exist - 0.06% (4)
full - 0.06% (4)
local - 0.06% (4)
mark - 0.06% (4)
map - 0.06% (4)
supporting - 0.06% (4)
problem - 0.06% (4)
keyboard: - 0.06% (4)
menus - 0.06% (4)
tabletop - 0.06% (4)
stylus-based - 0.06% (4)
work, - 0.06% (4)
high - 0.06% (4)
fingers - 0.06% (4)
towards - 0.06% (4)
while - 0.06% (4)
significantly - 0.06% (4)
asia - 0.06% (4)
maintaining - 0.06% (4)
stationary - 0.06% (4)
used - 0.06% (4)
offer - 0.06% (4)
personal - 0.06% (4)
was - 0.06% (4)
self-regulation - 0.06% (4)
who - 0.06% (4)
qualitative - 0.06% (4)
computing, - 0.06% (4)
improving - 0.06% (4)
entertainment - 0.06% (4)
allow - 0.06% (4)
across - 0.06% (4)
immersive - 0.06% (4)
gestural - 0.06% (4)
them - 0.06% (4)
open - 0.06% (4)
source - 0.06% (4)
look - 0.06% (4)
product - 0.06% (4)
had - 0.06% (4)
device. - 0.06% (4)
student - 0.06% (4)
summarization - 0.06% (4)
then - 0.06% (4)
narrative - 0.06% (4)
greater - 0.06% (4)
identify - 0.06% (4)
potential - 0.06% (4)
could - 0.06% (4)
quickly - 0.06% (4)
overall - 0.06% (4)
surface - 0.06% (4)
addiction - 0.06% (4)
2010. - 0.06% (4)
program - 0.06% (4)
chi; - 0.06% (4)
very - 0.06% (4)
hold - 0.06% (4)
they - 0.06% (4)
even - 0.06% (4)
time, - 0.06% (4)
structured - 0.06% (4)
2010), - 0.06% (4)
chair - 0.06% (4)
turningpoint - 0.06% (4)
multiplayer - 0.04% (3)
mobility - 0.04% (3)
least - 0.04% (3)
"an - 0.04% (3)
space. - 0.04% (3)
existing - 0.04% (3)
two-handed - 0.04% (3)
2012), - 0.04% (3)
2012. - 0.04% (3)
tap - 0.04% (3)
classify - 0.04% (3)
june - 0.04% (3)
times - 0.04% (3)
pen+touch - 0.04% (3)
bill - 0.04% (3)
action, - 0.04% (3)
edge. - 0.04% (3)
efficient - 0.04% (3)
make - 0.04% (3)
increasing - 0.04% (3)
reviewer - 0.04% (3)
decision - 0.04% (3)
where - 0.04% (3)
machine - 0.04% (3)
consume - 0.04% (3)
record - 0.04% (3)
location-sensitive - 0.04% (3)
group-based - 0.04% (3)
acoustic - 0.04% (3)
day - 0.04% (3)
why - 0.04% (3)
edge, - 0.04% (3)
served - 0.04% (3)
whack-a-mole - 0.04% (3)
without - 0.04% (3)
respectively - 0.04% (3)
faster - 0.04% (3)
sensors - 0.04% (3)
faculty - 0.04% (3)
representing - 0.04% (3)
exploratory - 0.04% (3)
unimanual - 0.04% (3)
situated - 0.04% (3)
pie - 0.04% (3)
appear - 0.04% (3)
(mobilehci - 0.04% (3)
issue - 0.04% (3)
constraints - 0.04% (3)
remain - 0.04% (3)
animations - 0.04% (3)
use. - 0.04% (3)
tasks - 0.04% (3)
services - 0.04% (3)
confirm - 0.04% (3)
preliminary - 0.04% (3)
evaluate - 0.04% (3)
november - 0.04% (3)
produce - 0.04% (3)
siggraph - 0.04% (3)
impact - 0.04% (3)
students - 0.04% (3)
college - 0.04% (3)
example - 0.04% (3)
ph.d. - 0.04% (3)
cued - 0.04% (3)
geographical - 0.04% (3)
timing - 0.04% (3)
contributors - 0.04% (3)
impaired - 0.04% (3)
selected - 0.04% (3)
giving - 0.04% (3)
physical - 0.04% (3)
workload - 0.04% (3)
increases - 0.04% (3)
diagrams - 0.04% (3)
eight - 0.04% (3)
vibration - 0.04% (3)
distance - 0.04% (3)
channel - 0.04% (3)
studied - 0.04% (3)
places - 0.04% (3)
objects - 0.04% (3)
shows - 0.04% (3)
repetition - 0.04% (3)
accurately - 0.04% (3)
hong - 0.04% (3)
specifically - 0.04% (3)
experience - 0.04% (3)
small - 0.04% (3)
chung, - 0.04% (3)
laboratory - 0.04% (3)
conducted - 0.04% (3)
theory - 0.04% (3)
computers - 0.04% (3)
located - 0.04% (3)
tamura, - 0.04% (3)
emerging - 0.04% (3)
musex: - 0.04% (3)
been - 0.04% (3)
important - 0.04% (3)
narrative-driven - 0.04% (3)
cognitive - 0.04% (3)
febrary - 0.04% (3)
koiti - 0.04% (3)
swing - 0.04% (3)
designed - 0.04% (3)
needs - 0.04% (3)
devices. - 0.04% (3)
comparison - 0.04% (3)
information. - 0.04% (3)
particularly - 0.04% (3)
semantic - 0.04% (3)
devices" - 0.04% (3)
together - 0.04% (3)
recognize - 0.04% (3)
past - 0.04% (3)
applications - 0.04% (3)
drawing - 0.04% (3)
condition - 0.04% (3)
quality - 0.04% (3)
presenters - 0.04% (3)
received - 0.04% (3)
canvas - 0.04% (3)
thus - 0.04% (3)
interview - 0.04% (3)
analysis - 0.04% (3)
improved - 0.04% (3)
contents - 0.04% (3)
interfaces - 0.04% (3)
ieice - 0.04% (3)
sketches - 0.04% (3)
pitchperfect - 0.04% (3)
abstracts - 0.04% (3)
nikola - 0.04% (3)
2004. - 0.04% (3)
additional - 0.04% (3)
showed - 0.04% (3)
interactions - 0.04% (3)
life - 0.04% (3)
phase - 0.04% (3)
caregivers - 0.04% (3)
use, - 0.04% (3)
positioning - 0.04% (3)
though - 0.04% (3)
fusako - 0.04% (3)
challenging - 0.04% (3)
effects - 0.04% (3)
institutional - 0.04% (3)
staff - 0.04% (3)
workshop - 0.04% (3)
gain - 0.04% (3)
haptics - 0.04% (3)
user-generated - 0.04% (3)
reveals - 0.04% (3)
upon - 0.04% (3)
planning - 0.04% (3)
2009), - 0.04% (3)
accordance - 0.04% (3)
smartphones - 0.04% (3)
march - 0.04% (3)
school - 0.04% (3)
panels - 0.04% (3)
toss-it: - 0.04% (3)
groups - 0.04% (3)
best - 0.04% (3)
integrated - 0.04% (3)
(2014) - 0.04% (3)
rehearse - 0.04% (3)
explore - 0.04% (3)
less - 0.04% (3)
microsoft - 0.04% (3)
among - 0.04% (3)
committee - 0.04% (3)
human-computer - 0.04% (3)
because - 0.04% (3)
some - 0.04% (3)
studies, - 0.04% (3)
computing. - 0.04% (3)
known - 0.04% (3)
reach - 0.04% (3)
yang, - 0.04% (3)
buxton. - 0.03% (2)
andy - 0.03% (2)
transferability - 0.03% (2)
simultaneous - 0.03% (2)
interesting - 0.03% (2)
deskterity - 0.03% (2)
emphasis - 0.03% (2)
benko, - 0.03% (2)
jenny - 0.03% (2)
statistics - 0.03% (2)
hrvoje - 0.03% (2)
rodenhouse, - 0.03% (2)
36, - 0.03% (2)
nicole - 0.03% (2)
coddington, - 0.03% (2)
semfeel: - 0.03% (2)
wilson, - 0.03% (2)
possible - 0.03% (2)
li-yi - 0.03% (2)
eunyoung - 0.03% (2)
pahud, - 0.03% (2)
carlos - 0.03% (2)
jensen - 0.03% (2)
"understanding - 0.03% (2)
sustainability: - 0.03% (2)
influence - 0.03% (2)
takaaki - 0.03% (2)
linguistics - 0.03% (2)
2007. - 0.03% (2)
course, - 0.03% (2)
pdas" - 0.03% (2)
"musex: - 0.03% (2)
kusunoki. - 0.03% (2)
questions - 0.03% (2)
(also - 0.03% (2)
affiliated - 0.03% (2)
informatics - 0.03% (2)
pdas. - 0.03% (2)
interfaculty - 0.03% (2)
education - 0.03% (2)
pi_book - 0.03% (2)
enjoyable - 0.03% (2)
december - 0.03% (2)
2004), - 0.03% (2)
actions" - 0.03% (2)
"toss-it: - 0.03% (2)
hiroki, - 0.03% (2)
keiichi - 0.03% (2)
toronto - 0.03% (2)
second - 0.03% (2)
ubicomp; - 0.03% (2)
fully - 0.03% (2)
range - 0.03% (2)
(2013), - 0.03% (2)
(2011 - 0.03% (2)
awards - 0.03% (2)
intelligent - 0.03% (2)
joined - 0.03% (2)
2003 - 0.03% (2)
productivity - 0.03% (2)
hci, - 0.03% (2)
department - 0.03% (2)
uist; - 0.03% (2)
ijcai - 0.03% (2)
electrical - 0.03% (2)
society - 0.03% (2)
journal) - 0.03% (2)
conference; - 0.03% (2)
(elsevier) - 0.03% (2)
chair: - 0.03% (2)
(eeis) - 0.03% (2)
world - 0.03% (2)
electronic - 0.03% (2)
sugimoto. - 0.03% (2)
roughly - 0.03% (2)
biography - 0.03% (2)
tokyo, - 0.03% (2)
scenarios - 0.03% (2)
algorithm - 0.03% (2)
iis-lab.org - 0.03% (2)
technique, - 0.03% (2)
wiki - 0.03% (2)
six - 0.03% (2)
professional - 0.03% (2)
address - 0.03% (2)
digital - 0.03% (2)
resolution - 0.03% (2)
thumbs - 0.03% (2)
screens - 0.03% (2)
visually-cued - 0.03% (2)
escape: - 0.03% (2)
january - 0.03% (2)
photos - 0.03% (2)
japan, - 0.03% (2)
every - 0.03% (2)
field - 0.03% (2)
assistant - 0.03% (2)
degree - 0.03% (2)
environment" - 0.03% (2)
experiment - 0.03% (2)
(hci) - 0.03% (2)
packet - 0.03% (2)
sync - 0.03% (2)
relative - 0.03% (2)
computational - 0.03% (2)
12, - 0.03% (2)
hinckley, - 0.03% (2)
deeply - 0.03% (2)
motion - 0.03% (2)
quantitative - 0.03% (2)
arhunter - 0.03% (2)
statistical - 0.03% (2)
take - 0.03% (2)
mobility. - 0.03% (2)
settings. - 0.03% (2)
contact - 0.03% (2)
hand. - 0.03% (2)
dominant - 0.03% (2)
michel - 0.03% (2)
enhance - 0.03% (2)
ken - 0.03% (2)
managing - 0.03% (2)
transform - 0.03% (2)
types - 0.03% (2)
associated - 0.03% (2)
benefits - 0.03% (2)
examine - 0.03% (2)
just - 0.03% (2)
satisfy - 0.03% (2)
amount - 0.03% (2)
sidepoint: - 0.03% (2)
perceived - 0.03% (2)
wpm. - 0.03% (2)
there - 0.03% (2)
expert - 0.03% (2)
minsam - 0.03% (2)
subin - 0.03% (2)
sessions. - 0.03% (2)
after - 0.03% (2)
joonwon - 0.03% (2)
uichin - 0.03% (2)
junehwa - 0.03% (2)
kyong-mee - 0.03% (2)
gesture. - 0.03% (2)
flick - 0.03% (2)
hyperslides: - 0.03% (2)
deliver - 0.03% (2)
supported - 0.03% (2)
strategies - 0.03% (2)
topic - 0.03% (2)
guide - 0.03% (2)
helpful - 0.03% (2)
individual - 0.03% (2)
beijing - 0.03% (2)
like - 0.03% (2)
impairments - 0.03% (2)
home - 0.03% (2)
long-term - 0.03% (2)
number - 0.03% (2)
working - 0.03% (2)
prototype - 0.03% (2)
interventions - 0.03% (2)
ranging - 0.03% (2)
increased - 0.03% (2)
control - 0.03% (2)
slides, - 0.03% (2)
linear - 0.03% (2)
creates - 0.03% (2)
helps - 0.03% (2)
minimalist - 0.03% (2)
language - 0.03% (2)
due - 0.03% (2)
performing - 0.03% (2)
cooperative - 0.03% (2)
shared - 0.03% (2)
smartphones: - 0.03% (2)
features - 0.03% (2)
content. - 0.03% (2)
tends - 0.03% (2)
within - 0.03% (2)
analyze - 0.03% (2)
scale - 0.03% (2)
non-risk - 0.03% (2)
addiction. - 0.03% (2)
increasingly - 0.03% (2)
overuse - 0.03% (2)
hooked - 0.03% (2)
behavior - 0.03% (2)
short - 0.03% (2)
talkzones: - 0.03% (2)
advance - 0.03% (2)
encourage - 0.03% (2)
section-based - 0.03% (2)
talks - 0.03% (2)
overrun - 0.03% (2)
mean - 0.03% (2)
rehearsed - 0.03% (2)
process - 0.03% (2)
whole - 0.03% (2)
detailed - 0.03% (2)
turningpoint: - 0.03% (2)
(cscw - 0.03% (2)
motivate - 0.03% (2)
reviewcollage: - 0.03% (2)
allows - 0.03% (2)
always - 0.03% (2)
decide - 0.03% (2)
traditionally - 0.03% (2)
couple - 0.03% (2)
one-handed - 0.03% (2)
entities. - 0.03% (2)
escape-keyboard: - 0.03% (2)
read - 0.03% (2)
five - 0.03% (2)
modern - 0.03% (2)
webpages - 0.03% (2)
performed - 0.03% (2)
yet - 0.03% (2)
oral - 0.03% (2)
preparation - 0.03% (2)
pitchperfect: - 0.03% (2)
save - 0.03% (2)
pairs, - 0.03% (2)
focusing - 0.03% (2)
probe - 0.03% (2)
expense - 0.03% (2)
recorded - 0.03% (2)
ma, - 0.03% (2)
would - 0.03% (2)
native - 0.03% (2)
spotlight: - 0.03% (2)
might - 0.03% (2)
locations - 0.03% (2)
offers - 0.03% (2)
suggestions - 0.03% (2)
predictions - 0.03% (2)
creating - 0.03% (2)
three - 0.03% (2)
keyboard. - 0.03% (2)
ipad - 0.03% (2)
(in - 0.03% (2)
what - 0.03% (2)
tall - 0.03% (2)
pixels - 0.03% (2)
diminish - 0.03% (2)
portion - 0.03% (2)
large - 0.03% (2)
frames - 0.03% (2)
current - 0.03% (2)
desirable - 0.03% (2)
results. - 0.03% (2)
simple - 0.03% (2)
gestures. - 0.03% (2)
summarizing - 0.03% (2)
predicts - 0.03% (2)
surfaces - 0.03% (2)
describe - 0.03% (2)
inputs - 0.03% (2)
greatly - 0.03% (2)
autocomplete - 0.03% (2)
multimodal - 0.03% (2)
pen, - 0.03% (2)
unimodal - 0.03% (2)
centered - 0.03% (2)
people's - 0.03% (2)
observe - 0.03% (2)
major - 0.03% (2)
approximately - 0.03% (2)
glean - 0.03% (2)
demonstrate - 0.03% (2)
users. - 0.03% (2)
"review - 0.03% (2)
joint - 0.03% (2)
trusty, - 0.03% (2)
andrew - 0.03% (2)
novati, - 0.03% (2)
michael - 0.03% (2)
options - 0.03% (2)
required - 0.03% (2)
iterative - 0.03% (2)
120 - 0.03% (2)
dearman, - 0.03% (2)
"information - 0.03% (2)
four - 0.03% (2)
motors - 0.03% (2)
during - 0.03% (2)
built - 0.03% (2)
same - 0.03% (2)
maps - 0.03% (2)
braille - 0.03% (2)
stylesnap - 0.03% (2)
spacesense: - 0.03% (2)
(ubicomp - 0.03% (2)
accuracy. - 0.03% (2)
flashformat - 0.03% (2)
direction - 0.03% (2)
study, - 0.03% (2)
were - 0.03% (2)
speaking, - 0.03% (2)
drinking, - 0.03% (2)
eating, - 0.03% (2)
systems. - 0.03% (2)
2015), - 0.03% (2)
nugu: - 0.03% (2)
bodyscope: - 0.03% (2)
571 - 0.03% (2)
2013, - 0.03% (2)
attached - 0.03% (2)
relationships - 0.03% (2)
david - 0.03% (2)
complement - 0.03% (2)
li, - 0.03% (2)
frank - 0.03% (2)
results, - 0.03% (2)
multi-touch - 0.03% (2)
computers, - 0.03% (2)
touchscreen - 0.03% (2)
user. - 0.03% (2)
users' - 0.03% (2)
right - 0.03% (2)
mixed-initiative - 0.03% (2)
multi-finger - 0.03% (2)
approaches - 0.03% (2)
consistency - 0.03% (2)
occluded, - 0.03% (2)
highlight - 0.03% (2)
workspace. - 0.03% (2)
devices, - 0.03% (2)
become - 0.03% (2)
slideware - 0.03% (2)
investigating - 0.03% (2)
banovic, - 0.03% (2)
alignment - 0.03% (2)
feedback. - 0.03% (2)
commercial - 0.03% (2)
mobisys - 0.03% (2)
Two-word phrases

the publication - 1.3% (90)
publication list - 1.3% (90)
see the - 0.86% (60)
of the - 0.72% (50)
koji yatani - 0.59% (41)
koji yatani, - 0.45% (31)
list see - 0.43% (30)
list hide - 0.43% (30)
hide the - 0.43% (30)
proceedings of - 0.42% (29)
in proceedings - 0.42% (29)
the use - 0.39% (27)
the user - 0.37% (26)
in the - 0.35% (24)
conference on - 0.33% (23)
on human - 0.22% (15)
on the - 0.22% (15)
n. truong - 0.22% (15)
tactile feedback - 0.22% (15)
mobile device - 0.22% (15)
khai n. - 0.22% (15)
and khai - 0.2% (14)
computing systems - 0.19% (13)
[publisher] see - 0.19% (13)
n. truong. - 0.19% (13)
factors in - 0.17% (12)
systems (chi - 0.17% (12)
sigchi conference - 0.17% (12)
the sigchi - 0.17% (12)
in computing - 0.17% (12)
human factors - 0.17% (12)
mobile devices - 0.17% (12)
[pdf] [publisher] - 0.17% (12)
[pdf] [video] - 0.17% (12)
text entry - 0.16% (11)
the acm - 0.16% (11)
we present - 0.16% (11)
to the - 0.16% (11)
in this - 0.16% (11)
[pdf] see - 0.16% (11)
yatani, and - 0.14% (10)
for mobile - 0.14% (10)
user interface - 0.14% (10)
mobile phone - 0.14% (10)
mobile touch-screen - 0.13% (9)
and mobile - 0.13% (9)
entry method - 0.13% (9)
at the - 0.13% (9)
university of - 0.13% (9)
the user's - 0.13% (9)
using a - 0.13% (9)
masanori sugimoto - 0.13% (9)
ubiquitous computing - 0.13% (9)
a mobile - 0.13% (9)
with the - 0.12% (8)
international conference - 0.12% (8)
our system - 0.12% (8)
user study - 0.12% (8)
with a - 0.1% (7)
, september - 0.1% (7)
a system - 0.1% (7)
a user - 0.1% (7)
touch-screen devices - 0.1% (7)
and technology - 0.09% (6)
on mobile - 0.09% (6)
we also - 0.09% (6)
such as - 0.09% (6)
spatial tactile - 0.09% (6)
design of - 0.09% (6)
on user - 0.09% (6)
koji yatani. - 0.09% (6)
of tokyo - 0.09% (6)
symposium on - 0.09% (6)
[video] see - 0.09% (6)
show that - 0.09% (6)
qwerty keyboard - 0.09% (6)
entry methods - 0.09% (6)
techniques for - 0.09% (6)
technology (uist - 0.07% (5)
this work - 0.07% (5)
handheld devices - 0.07% (5)
software and - 0.07% (5)
and hiromichi - 0.07% (5)
[video] [publisher] - 0.07% (5)
transactions on - 0.07% (5)
an evaluation - 0.07% (5)
2014), pp. - 0.07% (5)
for presentation - 0.07% (5)
visual and - 0.07% (5)
the university - 0.07% (5)
from the - 0.07% (5)
honorable mention - 0.07% (5)
award winner - 0.07% (5)
information transfer - 0.07% (5)
transfer techniques - 0.07% (5)
2005. [pdf] - 0.07% (5)
presentation authoring - 0.07% (5)
to help - 0.07% (5)
as the - 0.07% (5)
and the - 0.07% (5)
mention award - 0.07% (5)
we show - 0.07% (5)
this paper, - 0.07% (5)
hiromichi hashizume. - 0.07% (5)
however, the - 0.07% (5)
through a - 0.07% (5)
the 1line - 0.07% (5)
on smartphone - 0.07% (5)
based on - 0.07% (5)
technique using - 0.07% (5)
a museum - 0.07% (5)
found that - 0.07% (5)
and koji - 0.07% (5)
1line keyboard - 0.07% (5)
interface software - 0.07% (5)
we investigate - 0.07% (5)
winner [pdf] - 0.07% (5)
paper, we - 0.07% (5)
acm symposium - 0.07% (5)
2009. [pdf] - 0.07% (5)
the pen - 0.07% (5)
on handheld - 0.06% (4)
[publisher] koji - 0.06% (4)
peripheral knowledge - 0.06% (4)
is not - 0.06% (4)
handheld system - 0.06% (4)
our results - 0.06% (4)
and practice - 0.06% (4)
risk group - 0.06% (4)
input in - 0.06% (4)
evaluation of - 0.06% (4)
gestural input - 0.06% (4)
target selection - 0.06% (4)
ways that - 0.06% (4)
present the - 0.06% (4)
activity recognition - 0.06% (4)
that participants - 0.06% (4)
stylus-based text - 0.06% (4)
error rate - 0.06% (4)
able to - 0.06% (4)
knowledge panel - 0.06% (4)
in ways - 0.06% (4)
2013. [pdf] - 0.06% (4)
stationary and - 0.06% (4)
the design - 0.06% (4)
with mobile - 0.06% (4)
2014. honorable - 0.06% (4)
the screen - 0.06% (4)
associate professor - 0.06% (4)
human computer - 0.06% (4)
can be - 0.06% (4)
to support - 0.06% (4)
mobile computing - 0.06% (4)
system for - 0.06% (4)
a single - 0.06% (4)
our user - 0.06% (4)
spatial coordination - 0.06% (4)
work, we - 0.06% (4)
and tactile - 0.06% (4)
open source - 0.06% (4)
collaborative learning - 0.06% (4)
masanori sugimoto, - 0.06% (4)
2011), pp. - 0.06% (4)
this work, - 0.06% (4)
help the - 0.06% (4)
sugimoto, and - 0.06% (4)
research i - 0.06% (4)
sigchi international - 0.06% (4)
interface for - 0.06% (4)
museum with - 0.06% (4)
keyboard: a - 0.06% (4)
2010. [pdf] - 0.06% (4)
2010), pp. - 0.06% (4)
for the - 0.06% (4)
foot gestures - 0.06% (4)
smartphone use - 0.06% (4)
interested in - 0.06% (4)
in extended - 0.06% (4)
acm sigchi - 0.06% (4)
online review - 0.06% (4)
2005), pp. - 0.06% (4)
intuitive information - 0.06% (4)
adjective-noun word - 0.06% (4)
in their - 0.06% (4)
that users - 0.06% (4)
in china - 0.04% (3)
a qwerty - 0.04% (3)
2009), pp. - 0.04% (3)
layout in - 0.04% (3)
conducted a - 0.04% (3)
uses a - 0.04% (3)
reviews using - 0.04% (3)
+ touch - 0.04% (3)
user's attention - 0.04% (3)
extended abstracts - 0.04% (3)
methods on - 0.04% (3)
of stylus-based - 0.04% (3)
presentation slide - 0.04% (3)
showed that - 0.04% (3)
2011. [pdf] - 0.04% (3)
interact with - 0.04% (3)
and can - 0.04% (3)
dynamic presentation - 0.04% (3)
word pairs - 0.04% (3)
using adjective-noun - 0.04% (3)
user-generated reviews - 0.04% (3)
a similar - 0.04% (3)
geographical information - 0.04% (3)
effects of - 0.04% (3)
on ubiquitous - 0.04% (3)
coordination in - 0.04% (3)
on spatial - 0.04% (3)
the spatial - 0.04% (3)
and maintain - 0.04% (3)
acoustic sensor - 0.04% (3)
for activity - 0.04% (3)
study shows - 0.04% (3)
2012), pp. - 0.04% (3)
and practices - 0.04% (3)
2012. [pdf] - 0.04% (3)
maintain the - 0.04% (3)
representing geographical - 0.04% (3)
visually impaired - 0.04% (3)
mobile phones - 0.04% (3)
local constraints - 0.04% (3)
spatial relationship - 0.04% (3)
an area - 0.04% (3)
collaborative handheld - 0.04% (3)
feedback can - 0.04% (3)
shows that - 0.04% (3)
with an - 0.04% (3)
care staff - 0.04% (3)
single line - 0.04% (3)
can reduce - 0.04% (3)
qwerty layout - 0.04% (3)
can help - 0.04% (3)
and why - 0.04% (3)
october 2009. - 0.04% (3)
performance of - 0.04% (3)
contributors use - 0.04% (3)
how and - 0.04% (3)
wearable acoustic - 0.04% (3)
use diagrams - 0.04% (3)
the most - 0.04% (3)
in collaborative - 0.04% (3)
we developed - 0.04% (3)
a wearable - 0.04% (3)
communication and - 0.04% (3)
selection technique - 0.04% (3)
design and - 0.04% (3)
pen to - 0.04% (3)
phase accordance - 0.04% (3)
mobile interface - 0.04% (3)
reduce the - 0.04% (3)
koiti tamura, - 0.04% (3)
interaction with - 0.04% (3)
& services - 0.04% (3)
toss a - 0.04% (3)
to other - 0.04% (3)
study of - 0.04% (3)
as well - 0.04% (3)
yatani, koiti - 0.04% (3)
and darren - 0.04% (3)
support for - 0.04% (3)
the communication - 0.04% (3)
exploratory study - 0.04% (3)
college students - 0.04% (3)
accordance method - 0.04% (3)
positioning technique - 0.04% (3)
usage patterns - 0.04% (3)
compared to - 0.04% (3)
systems and - 0.04% (3)
abstracts of - 0.04% (3)
relationship between - 0.04% (3)
siggraph asia - 0.04% (3)
school of - 0.04% (3)
from university - 0.04% (3)
research interests - 0.04% (3)
lie in - 0.04% (3)
ieice transactions - 0.04% (3)
of engineering - 0.04% (3)
asia 2015 - 0.04% (3)
supporting children's - 0.04% (3)
musex: a - 0.04% (3)
to understand - 0.04% (3)
on computer - 0.04% (3)
approach to - 0.04% (3)
darren edge, - 0.04% (3)
2015. [pdf] - 0.04% (3)
group-based intervention - 0.04% (3)
app for - 0.04% (3)
improving self-regulation - 0.04% (3)
2004. [pdf] - 0.04% (3)
workshop on - 0.04% (3)
acm conference - 0.04% (3)
location-sensitive and - 0.04% (3)
information about - 0.04% (3)
(chi 2014), - 0.04% (3)
whack-a-mole game - 0.04% (3)
an exploratory - 0.04% (3)
and immersive - 0.04% (3)
our qualitative - 0.04% (3)
devices" in - 0.04% (3)
in slide - 0.04% (3)
that are - 0.04% (3)
yatani, masanori - 0.04% (3)
5, no. - 0.04% (3)
sugimoto and - 0.04% (3)
that the - 0.04% (3)
a peripheral - 0.04% (3)
using gestural - 0.04% (3)
a location-sensitive - 0.04% (3)
truong. "an - 0.04% (3)
yatani and - 0.04% (3)
game using - 0.04% (3)
method for - 0.04% (3)
the potential - 0.04% (3)
april 2014. - 0.04% (3)
a multiplayer - 0.04% (3)
contents on - 0.03% (2)
user interact - 0.03% (2)
a target - 0.03% (2)
through two - 0.03% (2)
the second - 0.03% (2)
mobile settings. - 0.03% (2)
pdas" in - 0.03% (2)
using visually-cued - 0.03% (2)
computing, vol. - 0.03% (2)
our study - 0.03% (2)
"musex: a - 0.03% (2)
fusako kusunoki. - 0.03% (2)
with musex - 0.03% (2)
in our - 0.03% (2)
masanori sugimoto. - 0.03% (2)
ieee workshop - 0.03% (2)
ten different - 0.03% (2)
interfaces on - 0.03% (2)
in computer - 0.03% (2)
at approximately - 0.03% (2)
top to - 0.03% (2)
multiple vibration - 0.03% (2)
professional activities - 0.03% (2)
and bill - 0.03% (2)
focus on - 0.03% (2)
ken hinckley, - 0.03% (2)
a program - 0.03% (2)
served as - 0.03% (2)
awards at - 0.03% (2)
best paper - 0.03% (2)
-- march - 0.03% (2)
michel pahud, - 0.03% (2)
nicole coddington, - 0.03% (2)
jenny rodenhouse, - 0.03% (2)
the field - 0.03% (2)
andy wilson, - 0.03% (2)
hrvoje benko, - 0.03% (2)
microsoft research - 0.03% (2)
computer science - 0.03% (2)
ubicomp 2015 - 0.03% (2)
of electrical - 0.03% (2)
or from - 0.03% (2)
with semantic - 0.03% (2)
program committee - 0.03% (2)
journal of - 0.03% (2)
to look - 0.03% (2)
is that - 0.03% (2)
feedback for - 0.03% (2)
quantitative and - 0.03% (2)
semantic tactile - 0.03% (2)
engineering and - 0.03% (2)
digital assistant - 0.03% (2)
interface with - 0.03% (2)
information systems - 0.03% (2)
mobile and - 0.03% (2)
about exhibitions - 0.03% (2)
list an - 0.03% (2)
touch-screen devices" - 0.03% (2)
influence of - 0.03% (2)
eunyoung chung, - 0.03% (2)
and location - 0.03% (2)
mobile devices, - 0.03% (2)
carlos jensen - 0.03% (2)
escape: a - 0.03% (2)
using ultrasonic - 0.03% (2)
understanding mobile - 0.03% (2)
and masanori - 0.03% (2)
less than - 0.03% (2)
phone situated - 0.03% (2)
pp. 46 - 0.03% (2)
sustainability: the - 0.03% (2)
the technique - 0.03% (2)
environment" in - 0.03% (2)
the ieee - 0.03% (2)
on transferability - 0.03% (2)
no. 1, - 0.03% (2)
which is - 0.03% (2)
of international - 0.03% (2)
are the - 0.03% (2)
phone sustainability - 0.03% (2)
ultrasonic phase - 0.03% (2)
using the - 0.03% (2)
pervasive computing - 0.03% (2)
september 2005. - 0.03% (2)
and accurate - 0.03% (2)
immersive environment" - 0.03% (2)
impact of - 0.03% (2)
visually-cued gestures - 0.03% (2)
occluded, and - 0.03% (2)
in different - 0.03% (2)
source software - 0.03% (2)
additional contents - 0.03% (2)
devices in - 0.03% (2)
and at - 0.03% (2)
system in - 0.03% (2)
interactive and - 0.03% (2)
(uist 2009), - 0.03% (2)
2004), pp. - 0.03% (2)
list understanding - 0.03% (2)
devices by - 0.03% (2)
why open - 0.03% (2)
source contributors - 0.03% (2)
"toss-it: intuitive - 0.03% (2)
hiroki, masanori - 0.03% (2)
and commercial - 0.03% (2)
chung, carlos - 0.03% (2)
tamura, keiichi - 0.03% (2)
oss development - 0.03% (2)
1, pp. - 0.03% (2)
diagrams in - 0.03% (2)
we explore - 0.03% (2)
position and - 0.03% (2)
differences in - 0.03% (2)
swing actions" - 0.03% (2)
toss and - 0.03% (2)
hashizume. "toss-it: - 0.03% (2)
keiichi hiroki, - 0.03% (2)
and thumb - 0.03% (2)
in oss - 0.03% (2)
in stationary - 0.03% (2)
spatial relationships - 0.03% (2)
a photo - 0.03% (2)
ability to - 0.03% (2)
the impact - 0.03% (2)
overuse among - 0.03% (2)
smartphones: an - 0.03% (2)
hooked on - 0.03% (2)
section-based time - 0.03% (2)
a mean - 0.03% (2)
who used - 0.03% (2)
12 participants - 0.03% (2)
the need - 0.03% (2)
support the - 0.03% (2)
for presentations - 0.03% (2)
smartphone usage - 0.03% (2)
time support - 0.03% (2)
talkzones: section-based - 0.03% (2)
september 2014. - 0.03% (2)
(mobilehci 2014), - 0.03% (2)
could be - 0.03% (2)
couple of - 0.03% (2)
two entities - 0.03% (2)
can support - 0.03% (2)
that reviewcollage - 0.03% (2)
differences between - 0.03% (2)
mobile device. - 0.03% (2)
to identify - 0.03% (2)
addiction. we - 0.03% (2)
this is - 0.03% (2)
presentation preparation - 0.03% (2)
and mental - 0.03% (2)
methods traditionally - 0.03% (2)
sight-free one-handed - 0.03% (2)
escape-keyboard: a - 0.03% (2)
structured presentation - 0.03% (2)
environment for - 0.03% (2)
integrated rehearsal - 0.03% (2)
understand the - 0.03% (2)
study to - 0.03% (2)
we conducted - 0.03% (2)
for structured - 0.03% (2)
the participants - 0.03% (2)
rehearsal environment - 0.03% (2)
pitchperfect: integrated - 0.03% (2)
attention in - 0.03% (2)
use of - 0.03% (2)
expense of - 0.03% (2)
is often - 0.03% (2)
narrative-driven presentation - 0.03% (2)
content. we - 0.03% (2)
and differences - 0.03% (2)
we then - 0.03% (2)
smartphone addiction. - 0.03% (2)
she is - 0.03% (2)
a couple - 0.03% (2)
the thumb - 0.03% (2)
human-computer interaction - 0.03% (2)
for hci - 0.03% (2)
at siggraph - 0.03% (2)
look at - 0.03% (2)
you are - 0.03% (2)
ubiquitous computing, - 0.03% (2)
interactive systems - 0.03% (2)
computational linguistics - 0.03% (2)
sensing technologies - 0.03% (2)
ubiquitous computing. - 0.03% (2)
(hci) and - 0.03% (2)
under the - 0.03% (2)
emphasis on - 0.03% (2)
of toronto - 0.03% (2)
initiative in - 0.03% (2)
course, interfaculty - 0.03% (2)
and informatics - 0.03% (2)
emerging design - 0.03% (2)
affiliated with - 0.03% (2)
systems (eeis) - 0.03% (2)
and information - 0.03% (2)
electrical engineering - 0.03% (2)
department of - 0.03% (2)
systems laboratory - 0.03% (2)
hci research - 0.03% (2)
autocomplete hand-drawn - 0.03% (2)
a product - 0.03% (2)
ranging from - 0.03% (2)
using online - 0.03% (2)
direct comparison - 0.03% (2)
supported cooperative - 0.03% (2)
effective for - 0.03% (2)
compared with - 0.03% (2)
study (n - 0.03% (2)
theory and - 0.03% (2)
maintaining their - 0.03% (2)
also found - 0.03% (2)
out of - 0.03% (2)
management strategies - 0.03% (2)
and communication - 0.03% (2)
of limiting - 0.03% (2)
nugu: a - 0.03% (2)
2015), pp. - 0.03% (2)
both use - 0.03% (2)
the same - 0.03% (2)
repetition of - 0.03% (2)
alignment and - 0.03% (2)
it can - 0.03% (2)
we evaluate - 0.03% (2)
when users - 0.03% (2)
system to - 0.03% (2)
the device - 0.03% (2)
rate in - 0.03% (2)
photo and - 0.03% (2)
the user. - 0.03% (2)
these devices. - 0.03% (2)
experience on - 0.03% (2)
diminish the - 0.03% (2)
portion of - 0.03% (2)
a large - 0.03% (2)
david dearman, - 0.03% (2)
yat li, - 0.03% (2)
frank chun - 0.03% (2)
results, we - 0.03% (2)
how to - 0.03% (2)
keyboard and - 0.03% (2)
into a - 0.03% (2)
pie menu - 0.03% (2)
unimanual multi-finger - 0.03% (2)
feedback on - 0.03% (2)
of visual - 0.03% (2)
using only - 0.03% (2)
systems using - 0.03% (2)
space and - 0.03% (2)
of information - 0.03% (2)
user studies, - 0.03% (2)
used to - 0.03% (2)
visual feedback - 0.03% (2)
that is - 0.03% (2)
of keys - 0.03% (2)
using spatial - 0.03% (2)
michael novati, - 0.03% (2)
hold a - 0.03% (2)
dearman, koji - 0.03% (2)
that our - 0.03% (2)
machine learning - 0.03% (2)
to learn - 0.03% (2)
we study - 0.03% (2)
can diminish - 0.03% (2)
paper award - 0.03% (2)
summarizing user-generated - 0.03% (2)
andrew trusty, - 0.03% (2)
participants could - 0.03% (2)
user can - 0.03% (2)
to perform - 0.03% (2)
explore the - 0.03% (2)
to quickly - 0.03% (2)
provides a - 0.03% (2)
difficult to - 0.03% (2)
the reviews - 0.03% (2)
for summarizing - 0.03% (2)
spotlight: a - 0.03% (2)
list review - 0.03% (2)
1line keyboard: - 0.03% (2)
and tap - 0.03% (2)
devices have - 0.03% (2)
impaired people - 0.03% (2)
typing sessions. - 0.03% (2)
to satisfy - 0.03% (2)
elders with - 0.03% (2)
interventions and - 0.03% (2)
due to - 0.03% (2)
and tools - 0.03% (2)
may 2013. - 0.03% (2)
(chi 2013), - 0.03% (2)
the expense - 0.03% (2)
practices with - 0.03% (2)
2013), pp. - 0.03% (2)
panel for - 0.03% (2)
associated with - 0.03% (2)
into the - 0.03% (2)
aware of - 0.03% (2)
but also - 0.03% (2)
the information - 0.03% (2)
slide authoring - 0.03% (2)
3, pp. - 0.03% (2)
of mobile - 0.03% (2)
international journal - 0.03% (2)
of our - 0.03% (2)
analysis of - 0.03% (2)
led to - 0.03% (2)
the keyboard - 0.03% (2)
study with - 0.03% (2)
we found - 0.03% (2)
to visually - 0.03% (2)
people using - 0.03% (2)
nikola banovic, - 0.03% (2)
relationships between - 0.03% (2)
interactive intelligent - 0.03% (2)
distance and - 0.03% (2)
about the - 0.03% (2)
of interest - 0.03% (2)
set of - 0.03% (2)
users to - 0.03% (2)
braille maps - 0.03% (2)
people with - 0.03% (2)
information to - 0.03% (2)
more about - 0.03% (2)
computing (ubicomp - 0.03% (2)
drinking, speaking, - 0.03% (2)
eating, drinking, - 0.03% (2)
wearable sensing - 0.03% (2)
the development - 0.03% (2)
sensor for - 0.03% (2)
bodyscope: a - 0.03% (2)
571 -- - 0.03% (2)
elders to - 0.03% (2)
and that - 0.03% (2)
and personal - 0.03% (2)
he also - 0.03% (2)
Three-word phrases

the publication list - 1.3% (90)
see the publication - 0.86% (60)
publication list hide - 0.43% (30)
publication list see - 0.43% (30)
hide the publication - 0.43% (30)
list hide the - 0.43% (30)
list see the - 0.43% (30)
in proceedings of - 0.42% (29)
proceedings of the - 0.39% (27)
conference on human - 0.22% (15)
[publisher] see the - 0.19% (13)
khai n. truong. - 0.19% (13)
and khai n. - 0.19% (13)
on human factors - 0.17% (12)
in computing systems - 0.17% (12)
the sigchi conference - 0.17% (12)
sigchi conference on - 0.17% (12)
computing systems (chi - 0.17% (12)
factors in computing - 0.17% (12)
of the sigchi - 0.17% (12)
human factors in - 0.17% (12)
of the acm - 0.16% (11)
[pdf] see the - 0.16% (11)
koji yatani, and - 0.14% (10)
[pdf] [publisher] see - 0.13% (9)
text entry method - 0.13% (9)
mobile touch-screen devices - 0.1% (7)
international conference on - 0.1% (7)
yatani, and khai - 0.1% (7)
university of tokyo - 0.09% (6)
text entry methods - 0.09% (6)
[video] see the - 0.09% (6)
[pdf] [video] see - 0.09% (6)
spatial tactile feedback - 0.09% (6)
and koji yatani. - 0.07% (5)
honorable mention award - 0.07% (5)
in a museum - 0.07% (5)
transfer techniques for - 0.07% (5)
and hiromichi hashizume. - 0.07% (5)
the university of - 0.07% (5)
[pdf] [video] [publisher] - 0.07% (5)
symposium on user - 0.07% (5)
interface software and - 0.07% (5)
techniques for mobile - 0.07% (5)
for mobile touch-screen - 0.07% (5)
award winner [pdf] - 0.07% (5)
the acm symposium - 0.07% (5)
in this paper, - 0.07% (5)
on user interface - 0.07% (5)
this paper, we - 0.07% (5)
software and technology - 0.07% (5)
children's collaborative learning - 0.06% (4)
we present the - 0.06% (4)
this work, we - 0.06% (4)
intuitive information transfer - 0.06% (4)
stylus-based text entry - 0.06% (4)
on handheld devices - 0.06% (4)
a user interface - 0.06% (4)
paper, we present - 0.06% (4)
stationary and mobile - 0.06% (4)
[video] [publisher] see - 0.06% (4)
masanori sugimoto, and - 0.06% (4)
and tactile feedback - 0.06% (4)
visual and tactile - 0.06% (4)
sigchi international conference - 0.06% (4)
acm sigchi international - 0.06% (4)
human computer interaction - 0.06% (4)
2014. honorable mention - 0.06% (4)
museum with pdas - 0.06% (4)
our user study - 0.06% (4)
a location-sensitive and - 0.04% (3)
interaction with mobile - 0.04% (3)
we developed a - 0.04% (3)
method for mobile - 0.04% (3)
adjective-noun word pairs - 0.04% (3)
on human computer - 0.04% (3)
methods on handheld - 0.04% (3)
[pdf] koji yatani, - 0.04% (3)
representing geographical information - 0.04% (3)
the design of - 0.04% (3)
koji yatani and - 0.04% (3)
a wearable acoustic - 0.04% (3)
gestural input in - 0.04% (3)
in a location-sensitive - 0.04% (3)
the acm conference - 0.04% (3)
using gestural input - 0.04% (3)
masanori sugimoto and - 0.04% (3)
2013. [pdf] see - 0.04% (3)
of stylus-based text - 0.04% (3)
an evaluation of - 0.04% (3)
in collaborative handheld - 0.04% (3)
on spatial coordination - 0.04% (3)
entry methods on - 0.04% (3)
devices & services - 0.04% (3)
spatial coordination in - 0.04% (3)
entry method for - 0.04% (3)
toss-it: intuitive information - 0.04% (3)
app for improving - 0.04% (3)
supporting children's collaborative - 0.04% (3)
& services (mobilehci - 0.04% (3)
with mobile devices - 0.04% (3)
koji yatani, koiti - 0.04% (3)
learning in a - 0.04% (3)
a mobile interface - 0.04% (3)
help the user - 0.04% (3)
acm conference on - 0.04% (3)
koji yatani, masanori - 0.04% (3)
a group-based intervention - 0.04% (3)
computer interaction with - 0.04% (3)
intervention app for - 0.04% (3)
for improving self-regulation - 0.04% (3)
group-based intervention app - 0.04% (3)
system for supporting - 0.04% (3)
ieice transactions on - 0.04% (3)
siggraph asia 2015 - 0.04% (3)
associate professor in - 0.04% (3)
the pen to - 0.04% (3)
in extended abstracts - 0.04% (3)
a system for - 0.04% (3)
the acm sigchi - 0.04% (3)
study shows that - 0.04% (3)
[publisher] koji yatani, - 0.04% (3)
mobile devices & - 0.04% (3)
the user's attention - 0.04% (3)
october 2009. [pdf] - 0.04% (3)
an exploratory study - 0.04% (3)
reviews using adjective-noun - 0.04% (3)
yatani, koiti tamura, - 0.04% (3)
systems (chi 2014), - 0.04% (3)
how and why - 0.04% (3)
extended abstracts of - 0.04% (3)
2009. [pdf] [publisher] - 0.04% (3)
yatani, and darren - 0.04% (3)
open source contributors - 0.03% (2)
eunyoung chung, carlos - 0.03% (2)
tactile feedback for - 0.03% (2)
constraints and practices - 0.03% (2)
tactile feedback to - 0.03% (2)
interface with semantic - 0.03% (2)
semfeel: a user - 0.03% (2)
open source software - 0.03% (2)
of the ieee - 0.03% (2)
of local constraints - 0.03% (2)
influence of local - 0.03% (2)
use diagrams in - 0.03% (2)
publication list understanding - 0.03% (2)
situated sustainability: the - 0.03% (2)
a mobile touch-screen - 0.03% (2)
(uist 2009), pp. - 0.03% (2)
the development of - 0.03% (2)
on mobile phone - 0.03% (2)
focus on the - 0.03% (2)
mobile phone situated - 0.03% (2)
sustainability: the influence - 0.03% (2)
and practices on - 0.03% (2)
for people with - 0.03% (2)
interactive intelligent systems - 0.03% (2)
escape: a target - 0.03% (2)
museum with pdas" - 0.03% (2)
koiti tamura, keiichi - 0.03% (2)
"toss-it: intuitive information - 0.03% (2)
toss and swing - 0.03% (2)
ieee workshop on - 0.03% (2)
2004. [pdf] [publisher] - 0.03% (2)
exploration in a - 0.03% (2)
on the pdas - 0.03% (2)
contents on the - 0.03% (2)
fusako kusunoki. "musex: - 0.03% (2)
sugimoto, and fusako - 0.03% (2)
and swing actions" - 0.03% (2)
kusunoki. "musex: a - 0.03% (2)
with pdas" in - 0.03% (2)
department of electrical - 0.03% (2)
engineering and information - 0.03% (2)
affiliated with emerging - 0.03% (2)
design and informatics - 0.03% (2)
course, interfaculty initiative - 0.03% (2)
university of toronto - 0.03% (2)
at the university - 0.03% (2)
as a program - 0.03% (2)
no. 1, pp. - 0.03% (2)
hiromichi hashizume. "toss-it: - 0.03% (2)
selection technique using - 0.03% (2)
whack-a-mole game using - 0.03% (2)
a target selection - 0.03% (2)
technique using visually-cued - 0.03% (2)
devices in stationary - 0.03% (2)
and mobile scenarios - 0.03% (2)
personal digital assistant - 0.03% (2)
and mobile settings. - 0.03% (2)
truong. "an evaluation - 0.03% (2)
vol. 5, no. - 0.03% (2)
"an evaluation of - 0.03% (2)
hiromichi hashizume. "a - 0.03% (2)
tamura, keiichi hiroki, - 0.03% (2)
multiplayer whack-a-mole game - 0.03% (2)
and immersive environment" - 0.03% (2)
of international conference - 0.03% (2)
september 2005. [pdf] - 0.03% (2)
, september 2005. - 0.03% (2)
and accurate positioning - 0.03% (2)
wilson, hrvoje benko, - 0.03% (2)
ultrasonic phase accordance - 0.03% (2)
technique using ultrasonic - 0.03% (2)
2005. [pdf] [publisher] - 0.03% (2)
and bill buxton. - 0.03% (2)
2011. [pdf] [publisher] - 0.03% (2)
jenny rodenhouse, andy - 0.03% (2)
a sight-free one-handed - 0.03% (2)
on smartphones: an - 0.03% (2)
exploratory study on - 0.03% (2)
smartphone overuse among - 0.03% (2)
narrative-driven presentation planning - 0.03% (2)
at the expense - 0.03% (2)
attention in ways - 0.03% (2)
pitchperfect: integrated rehearsal - 0.03% (2)
environment for structured - 0.03% (2)
our qualitative results - 0.03% (2)
integrated rehearsal environment - 0.03% (2)
for structured presentation - 0.03% (2)
winner [pdf] see - 0.03% (2)
entry methods traditionally - 0.03% (2)
among college students - 0.03% (2)
that users can - 0.03% (2)
allows the user - 0.03% (2)
rate in the - 0.03% (2)
sight-free one-handed text - 0.03% (2)
mobile touch-screen devices" - 0.03% (2)
sidepoint: a peripheral - 0.03% (2)
knowledge panel for - 0.03% (2)
presentation slide authoring - 0.03% (2)
peripheral knowledge panels - 0.03% (2)
a peripheral knowledge - 0.03% (2)
panel for presentation - 0.03% (2)
hyperslides: dynamic presentation - 0.03% (2)
we conducted a - 0.03% (2)
the impact of - 0.03% (2)
on smartphone overuse - 0.03% (2)
(chi 2013), pp. - 0.03% (2)
on the canvas - 0.03% (2)
information systems (eeis) - 0.03% (2)
school of engineering - 0.03% (2)
with emerging design - 0.03% (2)
and informatics course, - 0.03% (2)
interfaculty initiative in - 0.03% (2)
interests lie in - 0.03% (2)
human-computer interaction (hci) - 0.03% (2)
and ubiquitous computing. - 0.03% (2)
am interested in - 0.03% (2)
if you are - 0.03% (2)
autocomplete hand-drawn animations - 0.03% (2)
a system to - 0.03% (2)
repetition of objects - 0.03% (2)
hooked on smartphones: - 0.03% (2)
of limiting smartphone - 0.03% (2)
also found that - 0.03% (2)
self-regulation of limiting - 0.03% (2)
computer supported cooperative - 0.03% (2)
mobile interface for - 0.03% (2)
direct comparison using - 0.03% (2)
for direct comparison - 0.03% (2)
time support for - 0.03% (2)
in their ability - 0.03% (2)
12 participants who - 0.03% (2)
section-based time support - 0.03% (2)
services (mobilehci 2014), - 0.03% (2)
winner [pdf] [video] - 0.03% (2)
their ability to - 0.03% (2)
may 2013. [pdf] - 0.03% (2)
pahud, nicole coddington, - 0.03% (2)
koji yatani, michael - 0.03% (2)
the 1line keyboard: - 0.03% (2)
a qwerty layout - 0.03% (2)
in a single - 0.03% (2)
user experience on - 0.03% (2)
these devices. in - 0.03% (2)
the user can - 0.03% (2)
publication list review - 0.03% (2)
spotlight: a user - 0.03% (2)
interface for summarizing - 0.03% (2)
user-generated reviews using - 0.03% (2)
yatani, michael novati, - 0.03% (2)
andrew trusty, and - 0.03% (2)
novati, andrew trusty, - 0.03% (2)
of unimanual multi-finger - 0.03% (2)
for summarizing user-generated - 0.03% (2)
gestures from the - 0.03% (2)
developed a system - 0.03% (2)
(uist 2010), pp. - 0.03% (2)
october 2010. [pdf] - 0.03% (2)
touch = new - 0.03% (2)
hinckley, koji yatani, - 0.03% (2)
michel pahud, nicole - 0.03% (2)
coddington, jenny rodenhouse, - 0.03% (2)
andy wilson, hrvoje - 0.03% (2)
benko, and bill - 0.03% (2)
technology (uist 2010), - 0.03% (2)
koji yatani, michel - 0.03% (2)
electrical engineering and - 0.03% (2)
david dearman, koji - 0.03% (2)
interventions and tools - 0.03% (2)
user study shows - 0.03% (2)
can help the - 0.03% (2)
sensor for activity - 0.03% (2)
the user may - 0.03% (2)
acoustic sensor for - 0.03% (2)
conference on ubiquitous - 0.03% (2)
to visually impaired - 0.03% (2)
people using spatial - 0.03% (2)
people with visual - 0.03% (2)
to understand the - 0.03% (2)
an area of - 0.03% (2)
to help the - 0.03% (2)
spatial relationships between - 0.03% (2)
that participants could - 0.03% (2)
chun yat li, - 0.03% (2)
the spatial relationships - 0.03% (2)
information to visually - 0.03% (2)
impaired people using - 0.03% (2)
investigating effects of - 0.03% (2)
feedback on spatial - 0.03% (2)
coordination in collaborative - 0.03% (2)
tactile feedback can - 0.03% (2)
effects of visual - 0.03% (2)
conference on computer - 0.03% (2)
supported cooperative work - 0.03% (2)
design of unimanual - 0.03% (2)
multi-finger pie menu - 0.03% (2)
to the user. - 0.03% (2)
the field of - 0.03% (2)

Here you can find a chart of all your most popular one-, two-, and three-word phrases. Google and other search engines infer what your page is about from the words you use most frequently.
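The percentages in the list above appear to be each phrase's share of the page's total word count (e.g. 5 occurrences out of roughly 6968 words ≈ 0.07%). A minimal sketch of how such one- to three-word phrase frequencies can be computed — the function name and tokenization rules are assumptions, not hupso.pl's actual implementation:

```python
from collections import Counter
import re

def phrase_frequencies(text, max_n=3):
    """Count all 1- to max_n-word phrases in `text` and report each
    phrase's percentage of the total word count, mirroring the
    'phrase - % (count)' rows in the report above.
    NOTE: tokenization here is a simplifying assumption."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    total = len(words)
    freqs = {}
    for n in range(1, max_n + 1):
        # Slide an n-word window over the token list and count each phrase.
        grams = Counter(
            " ".join(words[i:i + n]) for i in range(len(words) - n + 1)
        )
        for phrase, count in grams.items():
            freqs[phrase] = (round(100 * count / total, 2), count)
    return freqs

# Example: "user interface" occurs 2 times among 6 words -> 33.33%
sample = "user interface design and user interface"
print(phrase_frequencies(sample)["user interface"])  # (33.33, 2)
```

Dividing an n-gram's count by the single-word total (rather than by the number of n-grams) matches the report's figures, e.g. 5 / 6968 ≈ 0.07%.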

Copyright © 2015-2016 hupso.pl. All rights reserved.

Hupso.pl is a web service where, with a single click, you can quickly and easily check a website for SEO. We offer free website positioning analysis as well as valuation of domains and websites. We maintain a ranking of Polish websites and an Alexa ranking of sites.