{"id":32,"date":"2021-01-19T15:16:35","date_gmt":"2021-01-19T15:16:35","guid":{"rendered":"https:\/\/cix.cs.uni-saarland.de\/?page_id=32"},"modified":"2021-02-01T10:45:47","modified_gmt":"2021-02-01T10:45:47","slug":"publications","status":"publish","type":"page","link":"https:\/\/cix.cs.uni-saarland.de\/?page_id=32","title":{"rendered":"Publications"},"content":{"rendered":"\n<div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"teachpress_publication_list\"><h3 class=\"tp_h3\" id=\"tp_h3_2026\">2026<\/h3><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Efficient Human-in-the-Loop Optimization via Priors Learned from User Models\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2026\/04\/image-1-300x135.png\" width=\"70\" alt=\"Efficient Human-in-the-Loop Optimization via Priors Learned from User Models\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Liao, Yi-Chi;  Belo, Jo\u00e3o;  Moon, Hee-Seung;  Steimle, J\u00fcrgen;  Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('29','tp_links')\" style=\"cursor:pointer;\">Efficient Human-in-the-Loop Optimization via Priors Learned from User Models<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">Association for Computing Machinery, <\/span><span class=\"tp_pub_additional_address\">New York, NY, USA, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 979-8-4007-2278-3\/26\/04<\/span>.<\/p><p class=\"tp_pub_menu\"><span 
class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_29\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('29','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_29\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('29','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_29\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('29','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_29\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{10.1145\/3772318.3791976,<br \/>\r\ntitle = {Efficient Human-in-the-Loop Optimization via Priors Learned from User Models},<br \/>\r\nauthor = {Yi-Chi Liao and Jo\u00e3o Belo and Hee-Seung Moon and J\u00fcrgen Steimle and Anna Maria Feit},<br \/>\r\nurl = {https:\/\/lnkd.in\/e4nYASE6<br \/>\r\nhttps:\/\/arxiv.org\/abs\/2510.07754<br \/>\r\nhttps:\/\/lnkd.in\/eg2hjQed},<br \/>\r\ndoi = {10.1145\/3772318.3791976},<br \/>\r\nisbn = {979-8-4007-2278-3\/26\/04},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-04-13},<br \/>\r\nurldate = {2026-04-13},<br \/>\r\nbooktitle = {Proceedings of the ACM CHI Conference on Human Factors in Computing Systems},<br \/>\r\nnumber = {416},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nabstract = {Human-in-the-loop optimization identifies optimal interface designs by iteratively observing user performance. However, it often requires numerous iterations due to the lack of prior information. While recent approaches have accelerated this process by leveraging previous optimization data, collecting user data remains costly and often impractical. 
We present a conceptual framework, Human-in-the-Loop Optimization with Model-Informed Priors (HOMI), which augments human-in-the-loop optimization with a training phase where the optimizer learns adaptation strategies from diverse, synthetic user data generated with predictive models before deployment. To realize HOMI, we introduce Neural Acquisition Function+ (NAF+), a Bayesian optimization method featuring a neural acquisition function trained with reinforcement learning. NAF+ learns optimization strategies from large-scale synthetic data, improving efficiency in real-time optimization with users. We evaluate HOMI and NAF+ with mid-air keyboard optimization, a representative VR input task. Our work presents a new approach for more efficient interface adaptation by bridging in situ and in silico optimization processes.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('29','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_29\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Human-in-the-loop optimization identifies optimal interface designs by iteratively observing user performance. However, it often requires numerous iterations due to the lack of prior information. While recent approaches have accelerated this process by leveraging previous optimization data, collecting user data remains costly and often impractical. We present a conceptual framework, Human-in-the-Loop Optimization with Model-Informed Priors (HOMI), which augments human-in-the-loop optimization with a training phase where the optimizer learns adaptation strategies from diverse, synthetic user data generated with predictive models before deployment. 
To realize HOMI, we introduce Neural Acquisition Function+ (NAF+), a Bayesian optimization method featuring a neural acquisition function trained with reinforcement learning. NAF+ learns optimization strategies from large-scale synthetic data, improving efficiency in real-time optimization with users. We evaluate HOMI and NAF+ with mid-air keyboard optimization, a representative VR input task. Our work presents a new approach for more efficient interface adaptation by bridging in situ and in silico optimization processes.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('29','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_29\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/lnkd.in\/e4nYASE6\" title=\"https:\/\/lnkd.in\/e4nYASE6\" target=\"_blank\">https:\/\/lnkd.in\/e4nYASE6<\/a><\/li><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2510.07754\" title=\"https:\/\/arxiv.org\/abs\/2510.07754\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2510.07754<\/a><\/li><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/lnkd.in\/eg2hjQed\" title=\"https:\/\/lnkd.in\/eg2hjQed\" target=\"_blank\">https:\/\/lnkd.in\/eg2hjQed<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3772318.3791976\" title=\"Follow DOI:10.1145\/3772318.3791976\" target=\"_blank\">doi:10.1145\/3772318.3791976<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('29','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Design Considerations for Human Oversight of AI: Insights from Co-Design Workshops and Work Design Theory\" 
src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2026\/04\/DesignConsiderations.png\" width=\"70\" alt=\"Design Considerations for Human Oversight of AI: Insights from Co-Design Workshops and Work Design Theory\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Faas, Cedric;  Kerstan, Sophie;  Uth, Richard;  Langer, Markus;  Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('30','tp_links')\" style=\"cursor:pointer;\">Design Considerations for Human Oversight of AI: Insights from Co-Design Workshops and Work Design Theory<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Proceedings of the 31st International Conference on Intelligent User Interfaces, <\/span><span class=\"tp_pub_additional_pages\">pp. 804\u2013821, <\/span><span class=\"tp_pub_additional_publisher\">Association for Computing Machinery, <\/span><span class=\"tp_pub_additional_address\">New York, NY, USA, <\/span><span class=\"tp_pub_additional_year\">2026<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 979-8-4007-1984-4<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_30\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('30','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_30\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('30','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_30\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('30','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_30\" style=\"display:none;\"><div 
class=\"tp_bibtex_entry\"><pre>@inproceedings{faas_design_2025,<br \/>\r\ntitle = {Design Considerations for Human Oversight of AI: Insights from Co-Design Workshops and Work Design Theory},<br \/>\r\nauthor = { Cedric Faas and Sophie Kerstan and Richard Uth and Markus Langer and Anna Maria Feit},<br \/>\r\nurl = {https:\/\/dl.acm.org\/doi\/10.1145\/3742413.3789100},<br \/>\r\ndoi = {10.1145\/3742413.3789100},<br \/>\r\nisbn = {979-8-4007-1984-4},<br \/>\r\nyear  = {2026},<br \/>\r\ndate = {2026-03-22},<br \/>\r\nurldate = {2026-03-22},<br \/>\r\nbooktitle = {Proceedings of the 31st International Conference on Intelligent User Interfaces},<br \/>\r\npages = {804\u2013821},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nseries = {IUI &#039;26},<br \/>\r\nabstract = {As AI systems become increasingly capable and autonomous, domain experts\u2019 roles are shifting from performing tasks themselves to overseeing AI-generated outputs. Such oversight is critical, as undetected errors can have serious consequences or undermine the benefits of AI. Effective oversight, however, depends not only on detecting and correcting AI errors but also on the motivation and engagement of the oversight personnel and the meaningfulness they see in their work. Yet little is known about how domain experts approach and experience the oversight task and what should be considered to design effective and motivational interfaces that support human oversight. To address these questions, we conducted four co-design workshops with domain experts from psychology and computer science. We asked them to first oversee an AI-based grading system, and then discuss their experiences and needs during oversight. Finally, they collaboratively prototyped interfaces that could support them in their oversight task. 
Our thematic analysis revealed four key user requirements: understanding tasks and responsibilities, gaining insight into the AI\u2019s decision-making, contributing meaningfully to the process, and collaborating with peers and the AI. We integrated these empirical insights with the SMART model of work design to develop a framework of twelve design considerations with increased transferability compared to the identified user requirements. Our framework links interface characteristics and user requirements to the psychological processes underlying effective and satisfying work. Being grounded in work design theory and overlapping with existing guidelines for human\u2013AI interaction, we expect these considerations to be applicable across domains and discuss how they go beyond existing guidelines for human-AI interaction to inform the design of engaging and meaningful interfaces that support human oversight of AI-based systems.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('30','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_30\" style=\"display:none;\"><div class=\"tp_abstract_entry\">As AI systems become increasingly capable and autonomous, domain experts\u2019 roles are shifting from performing tasks themselves to overseeing AI-generated outputs. Such oversight is critical, as undetected errors can have serious consequences or undermine the benefits of AI. Effective oversight, however, depends not only on detecting and correcting AI errors but also on the motivation and engagement of the oversight personnel and the meaningfulness they see in their work. Yet little is known about how domain experts approach and experience the oversight task and what should be considered to design effective and motivational interfaces that support human oversight. 
To address these questions, we conducted four co-design workshops with domain experts from psychology and computer science. We asked them to first oversee an AI-based grading system, and then discuss their experiences and needs during oversight. Finally, they collaboratively prototyped interfaces that could support them in their oversight task. Our thematic analysis revealed four key user requirements: understanding tasks and responsibilities, gaining insight into the AI\u2019s decision-making, contributing meaningfully to the process, and collaborating with peers and the AI. We integrated these empirical insights with the SMART model of work design to develop a framework of twelve design considerations with increased transferability compared to the identified user requirements. Our framework links interface characteristics and user requirements to the psychological processes underlying effective and satisfying work. Being grounded in work design theory and overlapping with existing guidelines for human\u2013AI interaction, we expect these considerations to be applicable across domains and discuss how they go beyond existing guidelines for human-AI interaction to inform the design of engaging and meaningful interfaces that support human oversight of AI-based systems.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('30','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_30\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3742413.3789100\" title=\"https:\/\/dl.acm.org\/doi\/10.1145\/3742413.3789100\" target=\"_blank\">https:\/\/dl.acm.org\/doi\/10.1145\/3742413.3789100<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3742413.3789100\" title=\"Follow DOI:10.1145\/3742413.3789100\" 
target=\"_blank\">doi:10.1145\/3742413.3789100<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('30','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2025\">2025<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"How We Type with Word Suggestions: Understanding Visual Attention and Checking Behavior during Mobile Text Input\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2026\/04\/Ubicomp_2025_Finland_Logo-300x169-2.png\" width=\"70\" alt=\"How We Type with Word Suggestions: Understanding Visual Attention and Checking Behavior during Mobile Text Input\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Li, Yang;  Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('27','tp_links')\" style=\"cursor:pointer;\">How We Type with Word Suggestions: Understanding Visual Attention and Checking Behavior during Mobile Text Input<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., <\/span><span class=\"tp_pub_additional_volume\">vol. 9, <\/span><span class=\"tp_pub_additional_issue\">iss. 
3, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_27\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('27','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_27\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('27','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_27\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('27','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_27\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10.1145\/3749520,<br \/>\r\ntitle = {How We Type with Word Suggestions: Understanding Visual Attention and Checking Behavior during Mobile Text Input},<br \/>\r\nauthor = { Yang Li and Anna Maria Feit},<br \/>\r\nurl = {https:\/\/doi.org\/10.1145\/3749520<br \/>\r\nhttps:\/\/osf.io\/g2457},<br \/>\r\ndoi = {10.1145\/3749520},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-09-01},<br \/>\r\nurldate = {2025-09-01},<br \/>\r\njournal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},<br \/>\r\nvolume = {9},<br \/>\r\nissue = {3},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nabstract = {Word suggestions are commonly used when people type on mobile devices. However, how users adjust their typing behavior and visual attention to integrate the use of word suggestions and whether they are effective in doing so remains unclear, mainly due to the lack of gaze data in realistic settings. In this paper, we conduct an eye-tracking study of word suggestion users transcribing and composing text on their own phones and keyboards. 
Our analysis reveals that users frequently checked the suggestion list without picking a suggestion, yielding a 68% failure rate. Screen recordings show that only about half of these Failed Suggestions can be attributed to the algorithm&#039;s performance. In 43.6% of cases, users typed the word manually even though they fixated on the correctly suggested word. We analyze the dynamics of users&#039; checking behavior and quantify the time cost of checking for word suggestions. Overall, we find that despite using word suggestions on a daily basis, users&#039; checking behavior is not well aligned with the performance of the suggestion algorithm, resulting in a decrease of typing speed. These findings have implications for the design of intelligent text entry systems and AI support in general, and our WS-Gaze dataset will support future research in this important direction.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('27','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_27\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Word suggestions are commonly used when people type on mobile devices. However, how users adjust their typing behavior and visual attention to integrate the use of word suggestions and whether they are effective in doing so remains unclear, mainly due to the lack of gaze data in realistic settings. In this paper, we conduct an eye-tracking study of word suggestion users transcribing and composing text on their own phones and keyboards. Our analysis reveals that users frequently checked the suggestion list without picking a suggestion, yielding a 68% failure rate. Screen recordings show that only about half of these Failed Suggestions can be attributed to the algorithm&#039;s performance. 
In 43.6% of cases, users typed the word manually even though they fixated on the correctly suggested word. We analyze the dynamics of users&#039; checking behavior and quantify the time cost of checking for word suggestions. Overall, we find that despite using word suggestions on a daily basis, users&#039; checking behavior is not well aligned with the performance of the suggestion algorithm, resulting in a decrease of typing speed. These findings have implications for the design of intelligent text entry systems and AI support in general, and our WS-Gaze dataset will support future research in this important direction.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('27','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_27\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1145\/3749520\" title=\"https:\/\/doi.org\/10.1145\/3749520\" target=\"_blank\">https:\/\/doi.org\/10.1145\/3749520<\/a><\/li><li><i class=\"ai ai-osf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/osf.io\/g2457\" title=\"https:\/\/osf.io\/g2457\" target=\"_blank\">https:\/\/osf.io\/g2457<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3749520\" title=\"Follow DOI:10.1145\/3749520\" target=\"_blank\">doi:10.1145\/3749520<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('27','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"RelEYEance: Gaze-Based Assessment of Users\u2019 AI-reliance at Run-Time\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2025\/04\/TODAI-ETRA25-LOGO.png\" width=\"70\" alt=\"RelEYEance: Gaze-Based Assessment of Users\u2019 AI-reliance at Run-Time\" 
\/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Wu, Zekun;  Wang, Yao;  Langer, Markus;  Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=538\">RelEYEance: Gaze-Based Assessment of Users\u2019 AI-reliance at Run-Time<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Proc. ACM Hum.-Comput. Interact., <\/span><span class=\"tp_pub_additional_volume\">vol. 9, <\/span><span class=\"tp_pub_additional_number\">no. ETRA16, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_25\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('25','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_25\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('25','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_25\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('25','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_25\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{WuETRA2025,<br \/>\r\ntitle = {RelEYEance: Gaze-Based Assessment of Users\u2019 AI-reliance at Run-Time},<br \/>\r\nauthor = {Zekun Wu and Yao Wang and Markus Langer and Anna Maria Feit},<br \/>\r\nurl = {https:\/\/doi.org\/10.1145\/3725841},<br \/>\r\ndoi = {10.1145.3725841},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-05-23},<br \/>\r\nurldate = {2025-05-23},<br \/>\r\njournal = {Proc. ACM Hum.-Comput. 
Interact.},<br \/>\r\nvolume = {9},<br \/>\r\nnumber = {ETRA16},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nabstract = {In time-critical detection tasks, such as drone monitoring, a key condition for users to effectively leverage AI assistance is to find an appropriate trade-off between making fast decisions and verifying AI suggestions, which we refer to as appropriate user reliance. However, assessing such reliance is often oversimplified by focusing solely on task outcomes, potentially overlooking whether users properly verify AI messages. We collected eye-tracking data from an AI-assisted monitoring task and developed a gaze-based reliance model: RelEYEance, to assess the extent of user reliance on AI-suggested alarms. We found that gaze patterns related to verification behaviors distinguish between appropriate reliance, over-reliance, and under-reliance, influencing task performance. We validated our model in a second user study, showing it can reliably detect users\u2019 over- and under-reliance at run-time, which could be used e.g. for issuing intervention messages. The results demonstrate the potential for real-time human-AI reliance assessment, facilitating adaptive reliance calibration.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('25','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_25\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In time-critical detection tasks, such as drone monitoring, a key condition for users to effectively leverage AI assistance is to find an appropriate trade-off between making fast decisions and verifying AI suggestions, which we refer to as appropriate user reliance. 
However, assessing such reliance is often oversimplified by focusing solely on task outcomes, potentially overlooking whether users properly verify AI messages. We collected eye-tracking data from an AI-assisted monitoring task and developed a gaze-based reliance model: RelEYEance, to assess the extent of user reliance on AI-suggested alarms. We found that gaze patterns related to verification behaviors distinguish between appropriate reliance, over-reliance, and under-reliance, influencing task performance. We validated our model in a second user study, showing it can reliably detect users\u2019 over- and under-reliance at run-time, which could be used e.g. for issuing intervention messages. The results demonstrate the potential for real-time human-AI reliance assessment, facilitating adaptive reliance calibration.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('25','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_25\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1145\/3725841\" title=\"https:\/\/doi.org\/10.1145\/3725841\" target=\"_blank\">https:\/\/doi.org\/10.1145\/3725841<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3725841\" title=\"Follow DOI:10.1145\/3725841\" target=\"_blank\">doi:10.1145\/3725841<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('25','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Understanding and Predicting Temporal Visual Attention Influenced by Dynamic Highlights in Monitoring Task\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2026\/04\/Ieee_blue-300x107.jpg\" width=\"70\" alt=\"Understanding and 
Predicting Temporal Visual Attention Influenced by Dynamic Highlights in Monitoring Task\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Wu, Zekun;  Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('28','tp_links')\" style=\"cursor:pointer;\">Understanding and Predicting Temporal Visual Attention Influenced by Dynamic Highlights in Monitoring Task<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Human-Machine Systems, <\/span><span class=\"tp_pub_additional_pages\">pp. 1-11, <\/span><span class=\"tp_pub_additional_year\">2025<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_28\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('28','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_28\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('28','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_28\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{wu2025understanding,<br \/>\r\ntitle = {Understanding and Predicting Temporal Visual Attention Influenced by Dynamic Highlights in Monitoring Task},<br \/>\r\nauthor = { Zekun Wu and Anna Maria Feit},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2510.08777},<br \/>\r\ndoi = {10.1109\/THMS.2025.3614364},<br \/>\r\nyear  = {2025},<br \/>\r\ndate = {2025-01-01},<br \/>\r\nurldate = {2025-01-01},<br \/>\r\njournal = {IEEE Transactions on Human-Machine Systems},<br \/>\r\npages = {1-11},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a 
class=\"tp_close\" onclick=\"teachpress_pub_showhide('28','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_28\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2510.08777\" title=\"https:\/\/arxiv.org\/abs\/2510.08777\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2510.08777<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/THMS.2025.3614364\" title=\"Follow DOI:10.1109\/THMS.2025.3614364\" target=\"_blank\">doi:10.1109\/THMS.2025.3614364<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('28','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2024\">2024<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_image_left\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=460\" title=\"Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction\"><img decoding=\"async\" name=\"Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2024\/04\/etrapaper.jpg\" width=\"70\" alt=\"Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction\" \/><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Das, Anwesha;  Wu, Zekun;  \u0160krjanec, Iza;  Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=460\">Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Proceedings of the ACM on Human-Computer Interaction, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, no. ETRA, <\/span><span class=\"tp_pub_additional_number\">no. 236, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_24\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('24','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_24\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('24','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_24\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('24','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_24\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Das2024,<br \/>\r\ntitle = {Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction},<br \/>\r\nauthor = {Anwesha Das and Zekun Wu and Iza \u0160krjanec and Anna Maria Feit},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2404.14232<br \/>\r\nhttps:\/\/osf.io\/x8p9b\/},<br \/>\r\ndoi = {10.1145\/3655610},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-06-03},<br \/>\r\nurldate = {2024-06-03},<br \/>\r\njournal = {Proceedings of the ACM on Human-Computer Interaction},<br \/>\r\nvolume = {8, No. ETRA},<br \/>\r\nnumber = {236},<br \/>\r\nabstract = {Visual highlighting can guide user attention in complex interfaces. However, its effectiveness under limited attentional capacities is underexplored. 
This paper examines the joint impact of visual highlighting (permanent and dynamic) and dual-task-induced cognitive load on gaze behaviour. Our analysis, using eye-movement data from 27 participants viewing 150 unique webpages reveals that while participants&#039; ability to attend to UI elements decreases with increasing cognitive load, dynamic adaptations (i.e., highlighting) remain attention-grabbing. The presence of these factors significantly alters what people attend to and thus what is salient. Accordingly, we show that state-of-the-art saliency models increase their performance when accounting for different cognitive loads. Our empirical insights, along with our openly available dataset, enhance our understanding of attentional processes in UIs under varying cognitive (and perceptual) loads and open the door for new models that can predict user attention while multitasking.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('24','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_24\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Visual highlighting can guide user attention in complex interfaces. However, its effectiveness under limited attentional capacities is underexplored. This paper examines the joint impact of visual highlighting (permanent and dynamic) and dual-task-induced cognitive load on gaze behaviour. Our analysis, using eye-movement data from 27 participants viewing 150 unique webpages reveals that while participants&#039; ability to attend to UI elements decreases with increasing cognitive load, dynamic adaptations (i.e., highlighting) remain attention-grabbing. The presence of these factors significantly alters what people attend to and thus what is salient. 
Accordingly, we show that state-of-the-art saliency models increase their performance when accounting for different cognitive loads. Our empirical insights, along with our openly available dataset, enhance our understanding of attentional processes in UIs under varying cognitive (and perceptual) loads and open the door for new models that can predict user attention while multitasking.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('24','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_24\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2404.14232\" title=\"https:\/\/arxiv.org\/abs\/2404.14232\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2404.14232<\/a><\/li><li><i class=\"ai ai-osf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/osf.io\/x8p9b\/\" title=\"https:\/\/osf.io\/x8p9b\/\" target=\"_blank\">https:\/\/osf.io\/x8p9b\/<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3655610\" title=\"Follow DOI:10.1145\/3655610\" target=\"_blank\">doi:10.1145\/3655610<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('24','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"An LLM-driven Transcription Task for Mobile Text Entry Studies\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2026\/04\/mum24-300x151.png\" width=\"70\" alt=\"An LLM-driven Transcription Task for Mobile Text Entry Studies\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Komninos, Andreas;  Feit, Anna Maria;  Leiva, Luis A.;  Lehmann, Florian;  Simou, Ioulia;  Minas, Dimosthenis;  Fotopoulos, Aggelos;  Xenos, Michalis<\/p><p 
class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('26','tp_links')\" style=\"cursor:pointer;\">An LLM-driven Transcription Task for Mobile Text Entry Studies<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Proceedings of the International Conference on Mobile and Ubiquitous Multimedia, <\/span><span class=\"tp_pub_additional_pages\">pp. 264\u2013279, <\/span><span class=\"tp_pub_additional_publisher\">Association for Computing Machinery, <\/span><span class=\"tp_pub_additional_address\">New York, NY, USA, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9798400712838<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_26\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('26','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_26\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('26','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_26\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('26','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_26\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{KomninosMUM24,<br \/>\r\ntitle = {An LLM-driven Transcription Task for Mobile Text Entry Studies},<br \/>\r\nauthor = {Andreas Komninos and Anna Maria Feit and Luis A. 
Leiva and Florian Lehmann and Ioulia Simou and Dimosthenis Minas and Aggelos Fotopoulos and Michalis Xenos},<br \/>\r\nurl = {https:\/\/www.komninos.info\/?q=node\/186},<br \/>\r\nseries = {MUM &#039;24},<br \/>\r\ndoi = {10.1145\/3701571.3701586},<br \/>\r\nisbn = {9798400712838},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\nbooktitle = {Proceedings of the International Conference on Mobile and Ubiquitous Multimedia},<br \/>\r\npages = {264\u2013279},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nabstract = {We explore a novel transcription task in mobile text entry research, presenting stimuli within LLM-generated conversational contexts to improve participant engagement and phrase memorability. We conducted two studies: an eye-tracking study examining participants\u2019 attention when presented with conversational contexts alongside stimuli, and an experiment comparing LLM-generated and human-generated prompt-response pairs in transcription tasks, involving both high and low memorability stimuli. Key findings reveal that presenting conversational contexts improves recall for low memorability phrases and results in fewer uncorrected errors during transcription. No significant effects were observed on other basic text entry metrics, or participant subjective appraisals of engagement with the novel task, suggesting it can be used safely as an alternative to the traditional transcription task. We discuss the potential of LLMs in improving text entry evaluation methods, including generating diverse linguistic styles, emotionally loaded contexts, and even simulating entire evaluation processes. 
Our study highlights the need for systematic approaches to generate and evaluate LLM outputs for research purposes, and for proposing new metrics and evaluation methods.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('26','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_26\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We explore a novel transcription task in mobile text entry research, presenting stimuli within LLM-generated conversational contexts to improve participant engagement and phrase memorability. We conducted two studies: an eye-tracking study examining participants\u2019 attention when presented with conversational contexts alongside stimuli, and an experiment comparing LLM-generated and human-generated prompt-response pairs in transcription tasks, involving both high and low memorability stimuli. Key findings reveal that presenting conversational contexts improves recall for low memorability phrases and results in fewer uncorrected errors during transcription. No significant effects were observed on other basic text entry metrics, or participant subjective appraisals of engagement with the novel task, suggesting it can be used safely as an alternative to the traditional transcription task. We discuss the potential of LLMs in improving text entry evaluation methods, including generating diverse linguistic styles, emotionally loaded contexts, and even simulating entire evaluation processes. 
Our study highlights the need for systematic approaches to generate and evaluate LLM outputs for research purposes, and for proposing new metrics and evaluation methods.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('26','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_26\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.komninos.info\/?q=node\/186\" title=\"https:\/\/www.komninos.info\/?q=node\/186\" target=\"_blank\">https:\/\/www.komninos.info\/?q=node\/186<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3701571.3701586\" title=\"Follow DOI:10.1145\/3701571.3701586\" target=\"_blank\">doi:10.1145\/3701571.3701586<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('26','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2023\">2023<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Typing Behavior is About More than Speed: Users&#039; Strategies for Choosing Word Suggestions Despite Slower Typing Rates\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2024\/04\/mobileHCI23.jpg\" width=\"70\" alt=\"Typing Behavior is About More than Speed: Users&#039; Strategies for Choosing Word Suggestions Despite Slower Typing Rates\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lehmann, Florian;  Kornecki, Itto;  Buschek, Daniel;  Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=421\">Typing Behavior is About More than Speed: Users&#039; Strategies for Choosing Word Suggestions Despite Slower Typing Rates<\/a> <span 
class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Proc. ACM Hum.-Comput. Interact., <\/span><span class=\"tp_pub_additional_volume\">vol. 7, <\/span><span class=\"tp_pub_additional_number\">no. MHCI, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_23\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('23','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_23\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('23','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_23\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('23','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_23\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10.1145\/3604276,<br \/>\r\ntitle = {Typing Behavior is About More than Speed: Users&#039; Strategies for Choosing Word Suggestions Despite Slower Typing Rates},<br \/>\r\nauthor = {Florian Lehmann and Itto Kornecki and Daniel Buschek and Anna Maria Feit},<br \/>\r\nurl = {https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3604276<br \/>\r\nhttps:\/\/osf.io\/u9aej\/},<br \/>\r\ndoi = {10.1145\/3604276},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-09-01},<br \/>\r\nurldate = {2023-09-01},<br \/>\r\njournal = {Proc. ACM Hum.-Comput. Interact.},<br \/>\r\nvolume = {7},<br \/>\r\nnumber = {MHCI},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nabstract = {Mobile word suggestions can slow down typing, yet are still widely used. 
To investigate the apparent benefits beyond speed, we analyzed typing behavior of 15,162 users of mobile devices. Controlling for natural typing speed (a confounding factor not considered by prior work), we statistically show that slower typists use suggestions more often but are slowed down by doing so. To better understand how these typists leverage suggestions \u2013 if not to improve their speed \u2013 we extract eight usage strategies, including completion, correction, and next-word prediction. We find that word characteristics, such as length or frequency, along with the strategy, are predictive of whether a user will select a suggestion. We show how to operationalize our findings by building and evaluating a predictive model of suggestion selection. Such a model could be used to augment existing suggestion algorithms to consider people&#039;s strategic use of word predictions beyond speed and keystroke savings.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('23','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_23\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Mobile word suggestions can slow down typing, yet are still widely used. To investigate the apparent benefits beyond speed, we analyzed typing behavior of 15,162 users of mobile devices. Controlling for natural typing speed (a confounding factor not considered by prior work), we statistically show that slower typists use suggestions more often but are slowed down by doing so. To better understand how these typists leverage suggestions \u2013 if not to improve their speed \u2013 we extract eight usage strategies, including completion, correction, and next-word prediction. 
We find that word characteristics, such as length or frequency, along with the strategy, are predictive of whether a user will select a suggestion. We show how to operationalize our findings by building and evaluating a predictive model of suggestion selection. Such a model could be used to augment existing suggestion algorithms to consider people&#039;s strategic use of word predictions beyond speed and keystroke savings.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('23','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_23\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3604276\" title=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3604276\" target=\"_blank\">https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3604276<\/a><\/li><li><i class=\"ai ai-osf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/osf.io\/u9aej\/\" title=\"https:\/\/osf.io\/u9aej\/\" target=\"_blank\">https:\/\/osf.io\/u9aej\/<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3604276\" title=\"Follow DOI:10.1145\/3604276\" target=\"_blank\">doi:10.1145\/3604276<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('23','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Towards Flexible and Robust User Interface Adaptations With Multiple Objectives\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2024\/04\/UIST23-1.jpg\" width=\"70\" alt=\"Towards Flexible and Robust User Interface Adaptations With Multiple Objectives\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Johns, Christoph Albert;  Belo, Jo\u00e3o Marcelo Evangelista;  Feit, Anna 
Maria;  Klokmose, Clemens Nylandsted;  Pfeuffer, Ken<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=420\">Towards Flexible and Robust User Interface Adaptations With Multiple Objectives<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, <\/span><span class=\"tp_pub_additional_publisher\">Association for Computing Machinery, <\/span><span class=\"tp_pub_additional_address\">San Francisco, CA, USA, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9798400701320<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_22\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('22','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_22\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('22','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_22\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('22','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_22\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{johns2024,<br \/>\r\ntitle = {Towards Flexible and Robust User Interface Adaptations With Multiple Objectives},<br \/>\r\nauthor = {Christoph Albert Johns and Jo\u00e3o Marcelo Evangelista Belo and Anna Maria Feit and Clemens Nylandsted Klokmose and Ken Pfeuffer},<br \/>\r\nurl = {https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3586183.3606799},<br \/>\r\ndoi = {10.1145\/3586183.3606799},<br \/>\r\nisbn = 
{9798400701320},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\nbooktitle = {Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {San Francisco, CA, USA},<br \/>\r\nseries = {UIST &#039;23},<br \/>\r\nabstract = {This paper proposes a new approach for online UI adaptation that aims to overcome the limitations of the most commonly used UI optimization method involving multiple objectives: weighted sum optimization. Weighted sums are highly sensitive to objective formulation, limiting the effectiveness of UI adaptations. We propose ParetoAdapt, an adaptation approach that uses online multi-objective optimization with a posteriori articulated preferences\u2014that is, articulation of preferences after the optimization has concluded\u2014to make UI adaptation robust to incomplete and inaccurate objective formulations. It offers users a flexible way to control adaptations by selecting from a set of Pareto optimal adaptation proposals and adjusting them to fit their needs. We showcase the feasibility and flexibility of ParetoAdapt by implementing an online layout adaptation system in a state-of-the-art 3D UI adaptation framework. We further evaluate its robustness and run-time in simulation-based experiments that allow us to systematically change the accuracy of the estimated user preferences. 
We conclude by discussing how our approach may impact the usability and practicality of online UI adaptations.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('22','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_22\" style=\"display:none;\"><div class=\"tp_abstract_entry\">This paper proposes a new approach for online UI adaptation that aims to overcome the limitations of the most commonly used UI optimization method involving multiple objectives: weighted sum optimization. Weighted sums are highly sensitive to objective formulation, limiting the effectiveness of UI adaptations. We propose ParetoAdapt, an adaptation approach that uses online multi-objective optimization with a posteriori articulated preferences\u2014that is, articulation of preferences after the optimization has concluded\u2014to make UI adaptation robust to incomplete and inaccurate objective formulations. It offers users a flexible way to control adaptations by selecting from a set of Pareto optimal adaptation proposals and adjusting them to fit their needs. We showcase the feasibility and flexibility of ParetoAdapt by implementing an online layout adaptation system in a state-of-the-art 3D UI adaptation framework. We further evaluate its robustness and run-time in simulation-based experiments that allow us to systematically change the accuracy of the estimated user preferences. 
We conclude by discussing how our approach may impact the usability and practicality of online UI adaptations.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('22','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_22\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3586183.3606799\" title=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3586183.3606799\" target=\"_blank\">https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3586183.3606799<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3586183.3606799\" title=\"Follow DOI:10.1145\/3586183.3606799\" target=\"_blank\">doi:10.1145\/3586183.3606799<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('22','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2022\">2022<\/h3><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"AUIT \u2013 the Adaptive User Interfaces Toolkit for Designing XR Applications\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2023\/04\/AUIT_small.jpg\" width=\"70\" alt=\"AUIT \u2013 the Adaptive User Interfaces Toolkit for Designing XR Applications\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Belo, Jo\u00e3o Marcelo Evangelista;  Lystb\u00e6k, Mathias N;  Feit, Anna Maria;  Pfeuffer, Ken;  K\u00e1n, Peter;  Oulasvirta, Antti;  Gr\u00f8nb\u00e6k, Kaj<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=403\">AUIT \u2013 the Adaptive User Interfaces Toolkit for Designing XR Applications<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Proceedings of the 35th Annual 
ACM Symposium on User Interface Software and Technology, UIST'22, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_21\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('21','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_21\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('21','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_21\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('21','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_21\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{belo22,<br \/>\r\ntitle = {AUIT \u2013 the Adaptive User Interfaces Toolkit for Designing XR Applications},<br \/>\r\nauthor = {Jo\u00e3o Marcelo Evangelista Belo and Mathias N Lystb\u00e6k and Anna Maria Feit and Ken Pfeuffer and Peter K\u00e1n and Antti Oulasvirta and Kaj Gr\u00f8nb\u00e6k},<br \/>\r\nurl = {https:\/\/dl.acm.org\/doi\/fullHtml\/10.1145\/3526113.3545651<br \/>\r\nhttps:\/\/github.com\/joaobelo92\/auit},<br \/>\r\ndoi = {10.1145\/3526113.3545651},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-10-29},<br \/>\r\nurldate = {2022-10-29},<br \/>\r\nbooktitle = {Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST'22},<br \/>\r\nabstract = {Adaptive user interfaces can improve experiences in Extended Reality (XR) applications by adapting interface elements according to the user's context. Although extensive work explores different adaptation policies, XR creators often struggle with their implementation, which involves laborious manual scripting. 
The few available tools are underdeveloped for realistic XR settings where it is often necessary to consider conflicting aspects that affect an adaptation. We fill this gap by presenting AUIT, a toolkit that facilitates the design of optimization-based adaptation policies. AUIT allows creators to flexibly combine policies that address common objectives in XR applications, such as element reachability, visibility, and consistency. Instead of using rules or scripts, specifying adaptation policies via adaptation objectives simplifies the design process and enables creative exploration of adaptations. After creators decide which adaptation objectives to use, a multi-objective solver finds appropriate adaptations in real-time. A study showed that AUIT allowed creators of XR applications to quickly and easily create high-quality adaptations.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('21','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_21\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Adaptive user interfaces can improve experiences in Extended Reality (XR) applications by adapting interface elements according to the user's context. Although extensive work explores different adaptation policies, XR creators often struggle with their implementation, which involves laborious manual scripting. The few available tools are underdeveloped for realistic XR settings where it is often necessary to consider conflicting aspects that affect an adaptation. We fill this gap by presenting AUIT, a toolkit that facilitates the design of optimization-based adaptation policies. AUIT allows creators to flexibly combine policies that address common objectives in XR applications, such as element reachability, visibility, and consistency. 
Instead of using rules or scripts, specifying adaptation policies via adaptation objectives simplifies the design process and enables creative exploration of adaptations. After creators decide which adaptation objectives to use, a multi-objective solver finds appropriate adaptations in real-time. A study showed that AUIT allowed creators of XR applications to quickly and easily create high-quality adaptations.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('21','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_21\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dl.acm.org\/doi\/fullHtml\/10.1145\/3526113.3545651\" title=\"https:\/\/dl.acm.org\/doi\/fullHtml\/10.1145\/3526113.3545651\" target=\"_blank\">https:\/\/dl.acm.org\/doi\/fullHtml\/10.1145\/3526113.3545651<\/a><\/li><li><i class=\"fab fa-github\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/github.com\/joaobelo92\/auit\" title=\"https:\/\/github.com\/joaobelo92\/auit\" target=\"_blank\">https:\/\/github.com\/joaobelo92\/auit<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3526113.3545651\" title=\"Follow DOI:10.1145\/3526113.3545651\" target=\"_blank\">doi:10.1145\/3526113.3545651<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('21','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2021\">2021<\/h3><div class=\"tp_publication tp_publication_inbook\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Eye Gaze Estimation and Its Applications\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2022\/05\/9783030826802.jpg\" width=\"70\" alt=\"Eye Gaze Estimation and Its Applications\" \/><\/div><div 
class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Zhang, Xucong;  Park, Seonwook;  Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a href=\"\">Eye Gaze Estimation and Its Applications<\/a> <span class=\"tp_pub_type tp_  inbook\">Book Chapter<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span> Yang Li, Otmar Hilliges (Ed.): <span class=\"tp_pub_additional_publisher\">Springer, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_20\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('20','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_20\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('20','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_20\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('20','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_20\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inbook{zhang2021,<br \/>\r\ntitle = {Eye Gaze Estimation and Its Applications},<br \/>\r\nauthor = {Xucong Zhang and Seonwook Park and Anna Maria Feit},<br \/>\r\neditor = {Yang Li and Otmar Hilliges},<br \/>\r\nurl = {https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2022\/05\/Zhang2021_Chapter_EyeGazeEstimationAndItsApplica.pdf},<br \/>\r\ndoi = {10.1007\/978-3-030-82681-9_4},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-11-04},<br \/>\r\npublisher = {Springer},<br \/>\r\nseries = {Artificial Intelligence for Human Computer Interaction: A Modern Approach},<br \/>\r\nabstract = {The human eye gaze is an important non-verbal cue that can unobtrusively provide information about the intention and attention of a user to enable 
intelligent interactive systems. Eye gaze can also be taken as input to systems as a replacement of the conventional mouse and keyboard, and can also be indicative of the cognitive state of the user. However, estimating and applying gaze in real-world applications poses significant challenges. In this chapter, we first review the development of gaze estimation methods in recent years. We especially focus on learning-based gaze estimation methods which benefit from large-scale data and deep learning methods that recently became available. Second, we discuss the challenges of using gaze estimation for real-world applications and our efforts toward making these methods easily usable for the Human-Computer Interaction community. Finally, we provide two application examples, demonstrating the use of eye gaze to enable attentive and adaptive interfaces.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inbook}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('20','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_20\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The human eye gaze is an important non-verbal cue that can unobtrusively provide information about the intention and attention of a user to enable intelligent interactive systems. Eye gaze can also be taken as input to systems as a replacement of the conventional mouse and keyboard, and can also be indicative of the cognitive state of the user. However, estimating and applying gaze in real-world applications poses significant challenges. In this chapter, we first review the development of gaze estimation methods in recent years. We especially focus on learning-based gaze estimation methods which benefit from large-scale data and deep learning methods that recently became available. 
Second, we discuss the challenges of using gaze estimation for real-world applications and our efforts toward making these methods easily usable for the Human-Computer Interaction community. Finally, we provide two application examples, demonstrating the use of eye gaze to enable attentive and adaptive interfaces.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('20','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_20\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2022\/05\/Zhang2021_Chapter_EyeGazeEstimationAndItsApplica.pdf\" title=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2022\/05\/Zhang2021_Chapter_EyeG[...]\" target=\"_blank\">https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2022\/05\/Zhang2021_Chapter_EyeG[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/https:\/\/doi.org\/10.1007\/978-3-030-82681-9_4\" title=\"Follow DOI:https:\/\/doi.org\/10.1007\/978-3-030-82681-9_4\" target=\"_blank\">doi:https:\/\/doi.org\/10.1007\/978-3-030-82681-9_4<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('20','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Complex Interaction as Emergent Behaviour: Simulating Mid-Air Virtual Keyboard Typing using Reinforcement Learning\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/09\/simulation-150x150.png\" width=\"70\" alt=\"Complex Interaction as Emergent Behaviour: Simulating Mid-Air Virtual Keyboard Typing using Reinforcement Learning\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Hetzel, Lorenz;  Dudley, John;  Feit, Anna 
Maria;  Kristensson, Per Ola<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=452\">Complex Interaction as Emergent Behaviour: Simulating Mid-Air Virtual Keyboard Typing using Reinforcement Learning<\/a> <span class=\"tp_pub_type tp_  conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">IEEE Transactions on Visualization and Computer Graphics, <\/span><span class=\"tp_pub_additional_publisher\">IEEE, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_19\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('19','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_19\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('19','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_19\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('19','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_19\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{hetzel22,<br \/>\r\ntitle = {Complex Interaction as Emergent Behaviour: Simulating Mid-Air Virtual Keyboard Typing using Reinforcement Learning},<br \/>\r\nauthor = {Lorenz Hetzel and John Dudley and Anna Maria Feit and Per Ola Kristensson},<br \/>\r\nurl = {http:\/\/pokristensson.com\/pubs\/HetzelEtAlTVCG2021.pdf},<br \/>\r\ndoi = {10.1109\/TVCG.2021.3106494},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-08-27},<br \/>\r\nurldate = {2021-08-27},<br \/>\r\nbooktitle = {IEEE Transactions on Visualization and Computer Graphics},<br \/>\r\npublisher = {IEEE},<br \/>\r\nabstract = {Accurately modelling user behaviour has the potential to significantly improve 
the quality of human-computer interaction. Traditionally, these models are carefully hand-crafted to approximate specific aspects of well-documented user behaviour. This limits their availability in virtual and augmented reality where user behaviour is often not yet well understood. Recent efforts have demonstrated that reinforcement learning can approximate human behaviour during simple goal-oriented reaching tasks. We build on these efforts and demonstrate that reinforcement learning can also approximate user behaviour in a complex mid-air interaction task: typing on a virtual keyboard. We present the first reinforcement learning-based user model for mid-air and surface-aligned typing on a virtual keyboard. Our model is shown to replicate high-level human typing behaviour. We demonstrate that this approach may be used to augment or replace human testing during the validation and development of virtual keyboards.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('19','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_19\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Accurately modelling user behaviour has the potential to significantly improve the quality of human-computer interaction. Traditionally, these models are carefully hand-crafted to approximate specific aspects of well-documented user behaviour. This limits their availability in virtual and augmented reality where user behaviour is often not yet well understood. Recent efforts have demonstrated that reinforcement learning can approximate human behaviour during simple goal-oriented reaching tasks. We build on these efforts and demonstrate that reinforcement learning can also approximate user behaviour in a complex mid-air interaction task: typing on a virtual keyboard. 
We present the first reinforcement learning-based user model for mid-air and surface-aligned typing on a virtual keyboard. Our model is shown to replicate high-level human typing behaviour. We demonstrate that this approach may be used to augment or replace human testing during the validation and development of virtual keyboards.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('19','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_19\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/pokristensson.com\/pubs\/HetzelEtAlTVCG2021.pdf\" title=\"http:\/\/pokristensson.com\/pubs\/HetzelEtAlTVCG2021.pdf\" target=\"_blank\">http:\/\/pokristensson.com\/pubs\/HetzelEtAlTVCG2021.pdf<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TVCG.2021.3106494\" title=\"Follow DOI:10.1109\/TVCG.2021.3106494\" target=\"_blank\">doi:10.1109\/TVCG.2021.3106494<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('19','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/xrgonomics2.png\" width=\"70\" alt=\"XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Belo, Jo\u00e3o;  Feit, Anna Maria;  Feuchtner, Tiare;  Gr\u00f8nb\u00e6k, Kaj<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('17','tp_links')\" style=\"cursor:pointer;\">XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces<\/a> <span class=\"tp_pub_type tp_  
inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI'21), <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_17\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('17','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_17\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('17','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_17\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('17','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_17\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{xrgonomics21,<br \/>\r\ntitle = {XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces},<br \/>\r\nauthor = {Jo\u00e3o Belo and Anna Maria Feit and Tiare Feuchtner and Kaj Gr\u00f8nb\u00e6k},<br \/>\r\nurl = {https:\/\/www.researchgate.net\/publication\/349110658_XRgonomics_Facilitating_the_Creation_of_Ergonomic_3D_Interfaces#fullTextFileContent<br \/>\r\nhttps:\/\/joaomebelo.com\/#\/project\/xrgonomics},<br \/>\r\ndoi = {10.1145\/3411764.3445349},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-05-08},<br \/>\r\nbooktitle = {Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI'21)},<br \/>\r\npublisher = {ACM},<br \/>\r\nabstract = {Arm discomfort is a common issue in Cross Reality applications involving prolonged mid-air interaction. 
Solving this problem is difficult because of the lack of tools and guidelines for 3D user interface design. Therefore, we propose a method to make existing ergonomic metrics available to creators during design by estimating the interaction cost at each reachable position in the user's environment. We present XRgonomics, a toolkit to visualize the interaction cost and make it available at runtime, allowing creators to identify UI positions that optimize users' comfort. Two scenarios show how the toolkit can support 3D UI design and dynamic adaptation of UIs based on spatial constraints. We present results from a walkthrough demonstration, which highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs. Finally, we discuss how the toolkit may address design goals beyond ergonomics.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('17','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_17\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Arm discomfort is a common issue in Cross Reality applications involving prolonged mid-air interaction. Solving this problem is difficult because of the lack of tools and guidelines for 3D user interface design. Therefore, we propose a method to make existing ergonomic metrics available to creators during design by estimating the interaction cost at each reachable position in the user's environment. We present XRgonomics, a toolkit to visualize the interaction cost and make it available at runtime, allowing creators to identify UI positions that optimize users' comfort. Two scenarios show how the toolkit can support 3D UI design and dynamic adaptation of UIs based on spatial constraints. 
We present results from a walkthrough demonstration, which highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs. Finally, we discuss how the toolkit may address design goals beyond ergonomics.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('17','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_17\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.researchgate.net\/publication\/349110658_XRgonomics_Facilitating_the_Creation_of_Ergonomic_3D_Interfaces#fullTextFileContent\" title=\"https:\/\/www.researchgate.net\/publication\/349110658_XRgonomics_Facilitating_the_C[...]\" target=\"_blank\">https:\/\/www.researchgate.net\/publication\/349110658_XRgonomics_Facilitating_the_C[...]<\/a><\/li><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/joaomebelo.com\/#\/project\/xrgonomics\" title=\"https:\/\/joaomebelo.com\/#\/project\/xrgonomics\" target=\"_blank\">https:\/\/joaomebelo.com\/#\/project\/xrgonomics<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3411764.3445349\" title=\"Follow DOI:10.1145\/3411764.3445349\" target=\"_blank\">doi:10.1145\/3411764.3445349<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('17','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_image_left\"><a href=\"https:\/\/cacm.acm.org\/magazines\/2021\/2\/250082-azerty-amlior\" target=\"_blank\"><img decoding=\"async\" name=\"AZERTY Am\u00e9lior\u00e9: Computational Design on a National Scale\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/2021_cacm-150x150.jpg\" width=\"70\" alt=\"AZERTY Am\u00e9lior\u00e9: Computational 
Design on a National Scale\" \/><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Feit, Anna Maria;  Nancel, Mathieu;  John, Maximilian;  Karrenbauer, Andreas;  Weir, Daryl;  Oulasvirta, Antti<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('3','tp_links')\" style=\"cursor:pointer;\">AZERTY Am\u00e9lior\u00e9: Computational Design on a National Scale<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Communications of the ACM, <\/span><span class=\"tp_pub_additional_volume\">vol. 64, <\/span><span class=\"tp_pub_additional_number\">no. 2, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0001-0782<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_3\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_3\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_3\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('3','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_3\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10.1145\/3382035,<br \/>\r\ntitle = {AZERTY Am\u00e9lior\u00e9: Computational Design on a National Scale},<br \/>\r\nauthor = {Anna Maria Feit and Mathieu Nancel and Maximilian John and Andreas Karrenbauer and Daryl Weir and Antti Oulasvirta},<br \/>\r\nurl = {https:\/\/cacm.acm.org\/magazines\/2021\/2\/250082-azerty-amlior<br 
\/>\r\nhttp:\/\/norme-azerty.fr\/en<br \/>\r\n},<br \/>\r\ndoi = {10.1145\/3382035},<br \/>\r\nissn = {0001-0782},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-01-01},<br \/>\r\njournal = {Communications of the ACM},<br \/>\r\nvolume = {64},<br \/>\r\nnumber = {2},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nabstract = {France is the first country in the world to adopt a keyboard standard informed by computational methods, improving the performance, ergonomics, and intuitiveness of the keyboard while enabling input of many more characters. We describe a human-centric approach developed jointly with stakeholders to utilize computational methods in the decision process not only to solve a well-defined problem but also to understand the design requirements, to inform subjective views, or to communicate the outcomes. To be more broadly useful, research must develop computational methods that can be used in a participatory and inclusive fashion respecting the different needs and roles of stakeholders. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_3\" style=\"display:none;\"><div class=\"tp_abstract_entry\">France is the first country in the world to adopt a keyboard standard informed by computational methods, improving the performance, ergonomics, and intuitiveness of the keyboard while enabling input of many more characters. We describe a human-centric approach developed jointly with stakeholders to utilize computational methods in the decision process not only to solve a well-defined problem but also to understand the design requirements, to inform subjective views, or to communicate the outcomes. 
To be more broadly useful, research must develop computational methods that can be used in a participatory and inclusive fashion respecting the different needs and roles of stakeholders. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_3\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/cacm.acm.org\/magazines\/2021\/2\/250082-azerty-amlior\" title=\"https:\/\/cacm.acm.org\/magazines\/2021\/2\/250082-azerty-amlior\" target=\"_blank\">https:\/\/cacm.acm.org\/magazines\/2021\/2\/250082-azerty-amlior<\/a><\/li><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/norme-azerty.fr\/en\" title=\"http:\/\/norme-azerty.fr\/en\" target=\"_blank\">http:\/\/norme-azerty.fr\/en<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3382035\" title=\"Follow DOI:10.1145\/3382035\" target=\"_blank\">doi:10.1145\/3382035<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('3','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2020\">2020<\/h3><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><a href=\"https:\/\/ait.ethz.ch\/projects\/2020\/relevance-detection\/\" target=\"_blank\"><img decoding=\"async\" name=\"Detecting Relevance during Decision-Making from Eye Movements for UI Adaptation\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/etra-20-2.png\" width=\"70\" alt=\"Detecting Relevance during Decision-Making from Eye Movements for UI Adaptation\" \/><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Feit, Anna Maria;  Vordemann, Lukas;  Park, Seonwook;  Berube, Caterina;  Hilliges, Otmar<\/p><p 
class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('6','tp_links')\" style=\"cursor:pointer;\">Detecting Relevance during Decision-Making from Eye Movements for UI Adaptation<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Symposium on Eye Tracking Research and Applications, <\/span><span class=\"tp_pub_additional_publisher\">Association for Computing Machinery, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450371339<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_6\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_6\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_6\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('6','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_6\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{feit20,<br \/>\r\ntitle = {Detecting Relevance during Decision-Making from Eye Movements for UI Adaptation},<br \/>\r\nauthor = {Anna Maria Feit and Lukas Vordemann and Seonwook Park and Caterina Berube and Otmar Hilliges},<br \/>\r\nurl = {https:\/\/ait.ethz.ch\/projects\/2020\/relevance-detection\/},<br \/>\r\ndoi = {10.1145\/3379155.3391321},<br \/>\r\nisbn = {9781450371339},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-06-01},<br \/>\r\nbooktitle = {Symposium on Eye Tracking Research and Applications},<br \/>\r\npublisher = 
{Association for Computing Machinery},<br \/>\r\nseries = {ETRA '20},<br \/>\r\nabstract = {This paper proposes an approach to detect information relevance during decision-making from eye movements in order to enable user interface adaptation. This is a challenging task because gaze behavior varies greatly across individual users and tasks and ground-truth data is difficult to obtain. Thus, prior work has mostly focused on simpler target-search tasks or on establishing general interest, where gaze behavior is less complex. From the literature, we identify six metrics that capture different aspects of the gaze behavior during decision-making and combine them in a voting scheme. We empirically show that this accounts for the large variations in gaze behavior and outperforms standalone metrics. Importantly, it offers an intuitive way to control the amount of detected information, which is crucial for different UI adaptation schemes to succeed. We show the applicability of our approach by developing a room-search application that changes the visual saliency of content detected as relevant. In an empirical study, we show that it detects up to 97% of relevant elements with respect to user self-reporting, which allows us to meaningfully adapt the interface, as confirmed by participants. Our approach is fast, does not need any explicit user input and can be applied independent of task and user.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_6\" style=\"display:none;\"><div class=\"tp_abstract_entry\">This paper proposes an approach to detect information relevance during decision-making from eye movements in order to enable user interface adaptation. 
This is a challenging task because gaze behavior varies greatly across individual users and tasks and ground-truth data is difficult to obtain. Thus, prior work has mostly focused on simpler target-search tasks or on establishing general interest, where gaze behavior is less complex. From the literature, we identify six metrics that capture different aspects of the gaze behavior during decision-making and combine them in a voting scheme. We empirically show that this accounts for the large variations in gaze behavior and outperforms standalone metrics. Importantly, it offers an intuitive way to control the amount of detected information, which is crucial for different UI adaptation schemes to succeed. We show the applicability of our approach by developing a room-search application that changes the visual saliency of content detected as relevant. In an empirical study, we show that it detects up to 97% of relevant elements with respect to user self-reporting, which allows us to meaningfully adapt the interface, as confirmed by participants. 
Our approach is fast, does not need any explicit user input and can be applied independent of task and user.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_6\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/ait.ethz.ch\/projects\/2020\/relevance-detection\/\" title=\"https:\/\/ait.ethz.ch\/projects\/2020\/relevance-detection\/\" target=\"_blank\">https:\/\/ait.ethz.ch\/projects\/2020\/relevance-detection\/<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3379155.3391321\" title=\"Follow DOI:10.1145\/3379155.3391321\" target=\"_blank\">doi:10.1145\/3379155.3391321<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('6','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2019\">2019<\/h3><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"SIGCHI Outstanding Dissertation Award: Assignment Problems for Optimizing Text Input\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/sigchi.jpg\" width=\"70\" alt=\"SIGCHI Outstanding Dissertation Award: Assignment Problems for Optimizing Text Input\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('14','tp_links')\" style=\"cursor:pointer;\">SIGCHI Outstanding Dissertation Award: Assignment Problems for Optimizing Text Input<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Extended Abstracts 
of the SIGCHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_address\">New York, NY, USA, <\/span><span class=\"tp_pub_additional_year\">2019<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450359719<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_14\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('14','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_14\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('14','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_14\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('14','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_14\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{10.1145\/3290607.3313773,<br \/>\r\ntitle = {SIGCHI Outstanding Dissertation Award: Assignment Problems for Optimizing Text Input},<br \/>\r\nauthor = {Anna Maria Feit},<br \/>\r\nurl = {https:\/\/doi.org\/10.1145\/3290607.3313773},<br \/>\r\ndoi = {10.1145\/3290607.3313773},<br \/>\r\nisbn = {9781450359719},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-01-01},<br \/>\r\nbooktitle = {Extended Abstracts of the SIGCHI Conference on Human Factors in Computing Systems},<br \/>\r\npublisher = {ACM},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nseries = {CHI EA '19},<br \/>\r\nabstract = {Text input methods are an integral part of our daily interaction with digital devices. However, their design poses a complex problem: for any method, we must decide which input action (a button press, a hand gesture, etc.) produces which symbol (e.g., a character or word). 
With only 26 symbols and input actions, there are already more than $10^{26}$ distinct solutions, making it impossible to find the best one through manual design. Prior work has shown that we can use optimization methods to search such large design spaces efficiently and automatically find a good user interface with respect to the given objectives [6]. However, work in the text entry domain has been limited mostly to the performance optimization of (soft-)keyboards (see [2] for an overview). The Ph.D. thesis [2] advances the field of text-entry optimization by enlarging the space of optimizable text-input methods and proposing new criteria for assessing their optimality. Firstly, the design problem is formulated as an assignment problem for integer programming. This enables the use of standard mathematical solvers and algorithms for efficiently finding good solutions. Then, objective functions are developed for assessing their optimality with respect to motor performance, ergonomics, and learnability. The corresponding models extend beyond interaction with soft keyboards, to consider multi-finger input, novel sensors, and alternative form factors. In addition, the thesis illustrates how to formulate models from prior work in terms of an assignment problem, providing a coherent theoretical basis for text entry optimization. The proposed objectives are applied in the optimization of three assignment problems: text input with multi-finger gestures in mid-air [8], text input on a long piano keyboard [4], and - for a contribution to the official French keyboard standard - input of special characters via a physical keyboard [3]. Combining the proposed models offers a multi-objective optimization approach able to capture the complex cognitive and motor processes during typing.
},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('14','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_14\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Text input methods are an integral part of our daily interaction with digital devices. However, their design poses a complex problem: for any method, we must decide which input action (a button press, a hand gesture, etc.) produces which symbol (e.g., a character or word). With only 26 symbols and input actions, there are already more than 10^26 distinct solutions, making it impossible to find the best one through manual design. Prior work has shown that we can use optimization methods to search such large design spaces efficiently and automatically find a good user interface with respect to the given objectives [6]. However, work in the text entry domain has been limited mostly to the performance optimization of (soft-)keyboards (see [2] for an overview). The Ph.D. thesis [2] advances the field of text-entry optimization by enlarging the space of optimizable text-input methods and proposing new criteria for assessing their optimality. Firstly, the design problem is formulated as an assignment problem for integer programming. This enables the use of standard mathematical solvers and algorithms for efficiently finding good solutions. Then, objective functions are developed for assessing their optimality with respect to motor performance, ergonomics, and learnability. The corresponding models extend beyond interaction with soft keyboards, to consider multi-finger input, novel sensors, and alternative form factors. In addition, the thesis illustrates how to formulate models from prior work in terms of an assignment problem, providing a coherent theoretical basis for text entry optimization.
The proposed objectives are applied in the optimization of three assignment problems: text input with multi-finger gestures in mid-air [8], text input on a long piano keyboard [4], and - for a contribution to the official French keyboard standard - input of special characters via a physical keyboard [3]. Combining the proposed models offers a multi-objective optimization approach able to capture the complex cognitive and motor processes during typing.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('14','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_14\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1145\/3290607.3313773\" title=\"https:\/\/doi.org\/10.1145\/3290607.3313773\" target=\"_blank\">https:\/\/doi.org\/10.1145\/3290607.3313773<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3290607.3313773\" title=\"Follow DOI:10.1145\/3290607.3313773\" target=\"_blank\">doi:10.1145\/3290607.3313773<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('14','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"How Do People Type on Mobile Devices? Observations from a Study with 37,000 Volunteers\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/20_mobile-2.png\" width=\"70\" alt=\"How Do People Type on Mobile Devices?
Observations from a Study with 37,000 Volunteers\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Palin, Kseniia;  Feit, Anna Maria;  Kim, Sunjun;  Kristensson, Per Ola;  Oulasvirta, Antti<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('7','tp_links')\" style=\"cursor:pointer;\">How Do People Type on Mobile Devices? Observations from a Study with 37,000 Volunteers<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">International Conference on Human-Computer Interaction with Mobile Devices and Services, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_address\">New York, NY, USA, <\/span><span class=\"tp_pub_additional_year\">2019<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450368254<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_7\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('7','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_7\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('7','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_7\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('7','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_7\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{palin19,<br \/>\r\ntitle = {How Do People Type on Mobile Devices? 
Observations from a Study with 37,000 Volunteers},<br \/>\r\nauthor = {Kseniia Palin and Anna Maria Feit and Sunjun Kim and Per Ola Kristensson and Antti Oulasvirta},<br \/>\r\nurl = {https:\/\/userinterfaces.aalto.fi\/typing37k\/<br \/>\r\nhttps:\/\/www.slideshare.net\/kimsunjun5\/how-do-people-type-on-mobile-devices-observations-from-a-study-with-37000-volunteers-mobilehci-2019},<br \/>\r\ndoi = {10.1145\/3338286.3340120},<br \/>\r\nisbn = {9781450368254},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-01-01},<br \/>\r\nbooktitle = {International Conference on Human-Computer Interaction with Mobile Devices and Services},<br \/>\r\npublisher = {ACM},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nseries = {MobileHCI '19},<br \/>\r\nabstract = {This paper presents a large-scale dataset on mobile text entry collected via a web-based transcription task performed by 37,370 volunteers. The average typing speed was 36.2 WPM with 2.3% uncorrected errors. The scale of the data enables powerful statistical analyses on the correlation between typing performance and various factors, such as demographics, finger usage, and use of intelligent text entry techniques. We report effects of age and finger usage on performance that correspond to previous studies. We also find evidence of relationships between performance and use of intelligent text entry techniques: auto-correct usage correlates positively with entry rates, whereas word prediction usage has a negative correlation. 
To aid further work on modeling, machine learning and design improvements in mobile text entry, we make the code and dataset openly available.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('7','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_7\" style=\"display:none;\"><div class=\"tp_abstract_entry\">This paper presents a large-scale dataset on mobile text entry collected via a web-based transcription task performed by 37,370 volunteers. The average typing speed was 36.2 WPM with 2.3% uncorrected errors. The scale of the data enables powerful statistical analyses on the correlation between typing performance and various factors, such as demographics, finger usage, and use of intelligent text entry techniques. We report effects of age and finger usage on performance that correspond to previous studies. We also find evidence of relationships between performance and use of intelligent text entry techniques: auto-correct usage correlates positively with entry rates, whereas word prediction usage has a negative correlation. 
To aid further work on modeling, machine learning and design improvements in mobile text entry, we make the code and dataset openly available.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('7','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_7\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/userinterfaces.aalto.fi\/typing37k\/\" title=\"https:\/\/userinterfaces.aalto.fi\/typing37k\/\" target=\"_blank\">https:\/\/userinterfaces.aalto.fi\/typing37k\/<\/a><\/li><li><i class=\"fab fa-slideshare\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.slideshare.net\/kimsunjun5\/how-do-people-type-on-mobile-devices-observations-from-a-study-with-37000-volunteers-mobilehci-2019\" title=\"https:\/\/www.slideshare.net\/kimsunjun5\/how-do-people-type-on-mobile-devices-obser[...]\" target=\"_blank\">https:\/\/www.slideshare.net\/kimsunjun5\/how-do-people-type-on-mobile-devices-obser[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3338286.3340120\" title=\"Follow DOI:10.1145\/3338286.3340120\" target=\"_blank\">doi:10.1145\/3338286.3340120<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('7','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><a href=\"https:\/\/dl.acm.org\/cms\/asset\/abab5113-2c8d-4599-95f6-d563f36b02be\/3332165.3347945.key.jpg\" target=\"_blank\"><img decoding=\"async\" name=\"Context-Aware Online Adaptation of Mixed Reality Interfaces\" src=\"https:\/\/dl.acm.org\/cms\/asset\/abab5113-2c8d-4599-95f6-d563f36b02be\/3332165.3347945.key.jpg\" width=\"70\" alt=\"Context-Aware Online Adaptation of Mixed Reality Interfaces\" \/><\/a><\/div><div class=\"tp_pub_info\"><p 
class=\"tp_pub_author\"> Lindlbauer, David;  Feit, Anna Maria;  Hilliges, Otmar<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=453\">Context-Aware Online Adaptation of Mixed Reality Interfaces<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Symposium on User Interface Software and Technology, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_year\">2019<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450368162<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_5\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_5\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_5\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('5','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_5\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{lindlbauer19,<br \/>\r\ntitle = {Context-Aware Online Adaptation of Mixed Reality Interfaces},<br \/>\r\nauthor = {David Lindlbauer and Anna Maria Feit and Otmar Hilliges},<br \/>\r\nurl = {https:\/\/ait.ethz.ch\/projects\/2019\/computationalMR\/},<br \/>\r\ndoi = {10.1145\/3332165.3347945},<br \/>\r\nisbn = {9781450368162},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-01-01},<br \/>\r\nurldate = {2019-01-01},<br \/>\r\nbooktitle = {Symposium on User Interface Software and Technology},<br \/>\r\npublisher = {ACM},<br \/>\r\nseries = {UIST 
&#039;19},<br \/>\r\nabstract = {We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users&#039; current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real-time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary-task interactions by 36%.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_5\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show.
This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users&#039; current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real-time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary-task interactions by 36%.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_5\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/ait.ethz.ch\/projects\/2019\/computationalMR\/\" title=\"https:\/\/ait.ethz.ch\/projects\/2019\/computationalMR\/\" target=\"_blank\">https:\/\/ait.ethz.ch\/projects\/2019\/computationalMR\/<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3332165.3347945\" title=\"Follow DOI:10.1145\/3332165.3347945\" target=\"_blank\">doi:10.1145\/3332165.3347945<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('5','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2018\">2018<\/h3><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><a
href=\"https:\/\/umtl.cs.uni-saarland.de\/research\/projects\/selection-based-text-entry-in-virtual-reality.html\" target=\"_blank\"><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Speicher, Marco;  Feit, Anna Maria;  Ziegler, Pascal;  Kr\u00fcger, Antonio<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=426\">Selection-Based Text Entry in Virtual Reality<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">SIGCHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_address\">New York, NY, USA, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450356206<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_13\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_13\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_13\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('13','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_13\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{speicher18,<br \/>\r\ntitle = {Selection-Based Text Entry in Virtual Reality},<br \/>\r\nauthor = {Marco Speicher and Anna Maria Feit and Pascal Ziegler and Antonio Kr\u00fcger},<br \/>\r\nurl = 
{https:\/\/umtl.cs.uni-saarland.de\/research\/projects\/selection-based-text-entry-in-virtual-reality.html},<br \/>\r\ndoi = {10.1145\/3173574.3174221},<br \/>\r\nisbn = {9781450356206},<br \/>\r\nyear  = {2018},<br \/>\r\ndate = {2018-01-01},<br \/>\r\nurldate = {2018-01-01},<br \/>\r\nbooktitle = {SIGCHI Conference on Human Factors in Computing Systems},<br \/>\r\npublisher = {ACM},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nseries = {CHI &#039;18},<br \/>\r\nabstract = {In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. While the technology for input as well as output devices is market ready, only a few solutions for text input exist, and empirical knowledge about performance and user preferences is lacking. In this paper, we study text entry in VR by selecting characters on a virtual keyboard. We discuss the design space for assessing selection-based text entry in VR. Then, we implement six methods that span different parts of the design space and evaluate their performance and user preferences. Our results show that pointing using tracked hand-held controllers outperforms all other methods. Other methods such as head pointing can be viable alternatives depending on available resources. We summarize our findings by formulating guidelines for choosing optimal virtual keyboard text entry methods in VR.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_13\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. 
While the technology for input as well as output devices is market ready, only a few solutions for text input exist, and empirical knowledge about performance and user preferences is lacking. In this paper, we study text entry in VR by selecting characters on a virtual keyboard. We discuss the design space for assessing selection-based text entry in VR. Then, we implement six methods that span different parts of the design space and evaluate their performance and user preferences. Our results show that pointing using tracked hand-held controllers outperforms all other methods. Other methods such as head pointing can be viable alternatives depending on available resources. We summarize our findings by formulating guidelines for choosing optimal virtual keyboard text entry methods in VR.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_13\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/umtl.cs.uni-saarland.de\/research\/projects\/selection-based-text-entry-in-virtual-reality.html\" title=\"https:\/\/umtl.cs.uni-saarland.de\/research\/projects\/selection-based-text-entry-in-[...]\" target=\"_blank\">https:\/\/umtl.cs.uni-saarland.de\/research\/projects\/selection-based-text-entry-in-[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3173574.3174221\" title=\"Follow DOI:10.1145\/3173574.3174221\" target=\"_blank\">doi:10.1145\/3173574.3174221<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('13','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Knierim, Pascal;  Schwind, 
Valentin;  Feit, Anna Maria;  Nieuwenhuizen, Florian;  Henze, Niels<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=425\">Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">SIGCHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_address\">New York, NY, USA, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450356206<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{10.1145\/3173574.3173919,<br \/>\r\ntitle = {Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands},<br \/>\r\nauthor = {Pascal Knierim and Valentin Schwind and Anna Maria Feit and Florian Nieuwenhuizen and Niels Henze},<br \/>\r\nurl = {https:\/\/doi.org\/10.1145\/3173574.3173919},<br \/>\r\ndoi = {10.1145\/3173574.3173919},<br \/>\r\nisbn = {9781450356206},<br \/>\r\nyear  = {2018},<br \/>\r\ndate 
= {2018-01-01},<br \/>\r\nurldate = {2018-01-01},<br \/>\r\nbooktitle = {SIGCHI Conference on Human Factors in Computing Systems},<br \/>\r\npublisher = {ACM},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nseries = {CHI &#039;18},<br \/>\r\nabstract = {Entering text is one of the most common tasks when interacting with computing systems. Virtual Reality (VR) presents a challenge as neither the user&#039;s hands nor the physical input devices are directly visible. Hence, conventional desktop peripherals are very slow, imprecise, and cumbersome. We developed an apparatus that tracks the user&#039;s hands, and a physical keyboard, and visualizes them in VR. In a text input study with 32 participants, we investigated the achievable text entry speed and the effect of hand representations and transparency on typing performance, workload, and presence. With our apparatus, experienced typists benefited from seeing their hands, and reached almost outside-VR performance. Inexperienced typists profited from semi-transparent hands, which enabled them to type just 5.6 WPM slower than with a regular desktop setup. We conclude that optimizing the visualization of hands in VR is important, especially for inexperienced typists, to enable a high typing performance.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Entering text is one of the most common tasks when interacting with computing systems. Virtual Reality (VR) presents a challenge as neither the user&#039;s hands nor the physical input devices are directly visible. Hence, conventional desktop peripherals are very slow, imprecise, and cumbersome.
We developed an apparatus that tracks the user&#039;s hands, and a physical keyboard, and visualizes them in VR. In a text input study with 32 participants, we investigated the achievable text entry speed and the effect of hand representations and transparency on typing performance, workload, and presence. With our apparatus, experienced typists benefited from seeing their hands, and reached almost outside-VR performance. Inexperienced typists profited from semi-transparent hands, which enabled them to type just 5.6 WPM slower than with a regular desktop setup. We conclude that optimizing the visualization of hands in VR is important, especially for inexperienced typists, to enable a high typing performance.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1145\/3173574.3173919\" title=\"https:\/\/doi.org\/10.1145\/3173574.3173919\" target=\"_blank\">https:\/\/doi.org\/10.1145\/3173574.3173919<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3173574.3173919\" title=\"Follow DOI:10.1145\/3173574.3173919\" target=\"_blank\">doi:10.1145\/3173574.3173919<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><a href=\"https:\/\/userinterfaces.aalto.fi\/136Mkeystrokes\/\" target=\"_blank\"><img decoding=\"async\" name=\"Observations on Typing from 136 Million Keystrokes\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/typing_chi18.jpg\" width=\"70\" alt=\"Observations on Typing from 136 Million Keystrokes\"
\/><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Dhakal, Vivek;  Feit, Anna Maria;  Kristensson, Per Ola;  Oulasvirta, Antti<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10','tp_links')\" style=\"cursor:pointer;\">Observations on Typing from 136 Million Keystrokes<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">SIGCHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_address\">New York, NY, US, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450356206<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{Dhakal2018,<br \/>\r\ntitle = {Observations on Typing from 136 Million Keystrokes},<br \/>\r\nauthor = {Vivek Dhakal and Anna Maria Feit and Per Ola Kristensson and Antti Oulasvirta},<br \/>\r\nurl = {https:\/\/userinterfaces.aalto.fi\/136Mkeystrokes\/<br 
\/>\r\nhttps:\/\/www.slideshare.net\/oulasvir\/observations-on-typing-from-136-million-keystrokes-presentation-by-antti-oulasvirta-at-chi2018-april-2018-montreal},<br \/>\r\ndoi = {10.1145\/3173574.3174220},<br \/>\r\nisbn = {9781450356206},<br \/>\r\nyear  = {2018},<br \/>\r\ndate = {2018-01-01},<br \/>\r\nbooktitle = {SIGCHI Conference on Human Factors in Computing Systems},<br \/>\r\njournal = {Proc. of CHI},<br \/>\r\npublisher = {ACM},<br \/>\r\naddress = {New York, NY, US},<br \/>\r\nseries = {CHI'18},<br \/>\r\nabstract = {We report on typing behaviour and performance of 168,000 volunteers in an online study. The large dataset allows detailed statistical analyses of keystroking patterns, linking them to typing performance. Besides reporting distributions and confirming some earlier findings, we report two new findings. First, letter pairs typed by different hands or fingers are more predictive of typing speed than, for example, letter repetitions. Second, rollover-typing, wherein the next key is pressed before the previous one is released, is surprisingly prevalent. Notwithstanding considerable variation in typing patterns, unsupervised clustering using normalised inter-key intervals reveals that most users can be divided into eight groups of typists that differ in performance, accuracy, hand and finger usage, and rollover. The code and dataset are released for scientific use.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We report on typing behaviour and performance of 168,000 volunteers in an online study. The large dataset allows detailed statistical analyses of keystroking patterns, linking them to typing performance.
Besides reporting distributions and confirming some earlier findings, we report two new findings. First, letter pairs typed by different hands or fingers are more predictive of typing speed than, for example, letter repetitions. Second, rollover-typing, wherein the next key is pressed before the previous one is released, is surprisingly prevalent. Notwithstanding considerable variation in typing patterns, unsupervised clustering using normalised inter-key intervals reveals that most users can be divided into eight groups of typists that differ in performance, accuracy, hand and finger usage, and rollover. The code and dataset are released for scientific use.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/userinterfaces.aalto.fi\/136Mkeystrokes\/\" title=\"https:\/\/userinterfaces.aalto.fi\/136Mkeystrokes\/\" target=\"_blank\">https:\/\/userinterfaces.aalto.fi\/136Mkeystrokes\/<\/a><\/li><li><i class=\"fab fa-slideshare\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.slideshare.net\/oulasvir\/observations-on-typing-from-136-million-keystrokes-presentation-by-antti-oulasvirta-at-chi2018-april-2018-montreal\" title=\"https:\/\/www.slideshare.net\/oulasvir\/observations-on-typing-from-136-million-keys[...]\" target=\"_blank\">https:\/\/www.slideshare.net\/oulasvir\/observations-on-typing-from-136-million-keys[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3173574.3174220\" title=\"Follow DOI:10.1145\/3173574.3174220\" target=\"_blank\">doi:10.1145\/3173574.3174220<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\"
onclick=\"teachpress_pub_showhide('10','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_phdthesis\"><div class=\"tp_pub_image_left\"><a href=\"http:\/\/urn.fi\/URN:ISBN:978-952-60-8016-1\" target=\"_blank\"><img decoding=\"async\" name=\"Assignment Problems for Optimizing Text Input\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/isbn9789526080161.png\" width=\"70\" alt=\"Assignment Problems for Optimizing Text Input\" \/><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Feit, Anna Maria<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('2','tp_links')\" style=\"cursor:pointer;\">Assignment Problems for Optimizing Text Input<\/a> <span class=\"tp_pub_type tp_  phdthesis\">PhD Thesis<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_school\">Aalto University, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_2\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_2\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_2\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('2','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_2\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@phdthesis{Feit2018,<br \/>\r\ntitle = {Assignment Problems for Optimizing Text Input},<br \/>\r\nauthor = {Anna Maria Feit},<br \/>\r\nurl = {http:\/\/urn.fi\/URN:ISBN:978-952-60-8016-1},<br \/>\r\nyear  = {2018},<br \/>\r\ndate = {2018-01-01},<br \/>\r\npages 
= {182 + app. 56},<br \/>\r\npublisher = {Aalto University},<br \/>\r\nschool = {Aalto University},<br \/>\r\nabstract = {Text input methods are an integral part of our daily interaction with digital devices. However, their design poses a complex problem: for any method, we must decide which input action (a button press, a hand gesture, etc.) produces which symbol (e.g., a character or word). With only 26 symbols and input actions, there are already more than 10^26 distinct solutions, making it impossible to find the best one through manual design. Prior work has shown that we can use optimization methods to search such large design spaces efficiently and automatically find the best solution for a given task and objective. However, work in this domain has been limited mostly to the performance optimization of keyboards. The Ph.D. thesis advances the field of text-entry optimization by enlarging the space of optimizable text-input methods and proposing new criteria for assessing their optimality. Firstly, the design problem is formulated as an assignment problem for integer programming. This enables the use of standard mathematical solvers and algorithms for efficiently finding good solutions. Then, objective functions are developed, for assessing their optimality with respect to motor performance, ergonomics, and learnability. The corresponding models extend beyond interaction with soft keyboards, to consider multi-finger input, novel sensors, and alternative form factors. In addition, the thesis illustrates how to formulate models from prior work in terms of an assignment problem, providing a coherent theoretical basis for text-entry optimization. The proposed objectives are applied in the optimization of three assignment problems: text input with multi-finger gestures in mid-air, text input on a long piano keyboard, and -- for a contribution to the official French keyboard standard -- input of special characters via a physical keyboard. 
Combining the proposed models offers a multi-objective optimization approach able to capture the complex cognitive and motor processes during typing. Finally, the dissertation discusses future work that is needed to solve the long-standing problem of finding the optimal layout for physical keyboards, in light of empirical evidence that prior models are insufficient to respond to the diverse typing strategies people employ with modern keyboards. The thesis advances the state of the art in text-entry optimization by proposing novel objective functions that quantify the performance, ergonomics and learnability of a text input method. The objectives presented are formulated as assignment problems, which can be solved with integer programming via standard mathematical solvers or heuristic algorithms. While the work focused on text input, the assignment problem can be used to model other design problems in HCI (e.g., how best to assign commands to UI controls or distribute UI elements across several devices), for which the same problem formulations, optimization techniques, and even models could be applied.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {phdthesis}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_2\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Text input methods are an integral part of our daily interaction with digital devices. However, their design poses a complex problem: for any method, we must decide which input action (a button press, a hand gesture, etc.) produces which symbol (e.g., a character or word). With only 26 symbols and input actions, there are already more than 10^26 distinct solutions, making it impossible to find the best one through manual design. 
Prior work has shown that we can use optimization methods to search such large design spaces efficiently and automatically find the best solution for a given task and objective. However, work in this domain has been limited mostly to the performance optimization of keyboards. The Ph.D. thesis advances the field of text-entry optimization by enlarging the space of optimizable text-input methods and proposing new criteria for assessing their optimality. Firstly, the design problem is formulated as an assignment problem for integer programming. This enables the use of standard mathematical solvers and algorithms for efficiently finding good solutions. Then, objective functions are developed, for assessing their optimality with respect to motor performance, ergonomics, and learnability. The corresponding models extend beyond interaction with soft keyboards, to consider multi-finger input, novel sensors, and alternative form factors. In addition, the thesis illustrates how to formulate models from prior work in terms of an assignment problem, providing a coherent theoretical basis for text-entry optimization. The proposed objectives are applied in the optimization of three assignment problems: text input with multi-finger gestures in mid-air, text input on a long piano keyboard, and -- for a contribution to the official French keyboard standard -- input of special characters via a physical keyboard. Combining the proposed models offers a multi-objective optimization approach able to capture the complex cognitive and motor processes during typing. Finally, the dissertation discusses future work that is needed to solve the long-standing problem of finding the optimal layout for physical keyboards, in light of empirical evidence that prior models are insufficient to respond to the diverse typing strategies people employ with modern keyboards. 
The thesis advances the state of the art in text-entry optimization by proposing novel objective functions that quantify the performance, ergonomics and learnability of a text input method. The objectives presented are formulated as assignment problems, which can be solved with integer programming via standard mathematical solvers or heuristic algorithms. While the work focused on text input, the assignment problem can be used to model other design problems in HCI (e.g., how best to assign commands to UI controls or distribute UI elements across several devices), for which the same problem formulations, optimization techniques, and even models could be applied.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_2\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/urn.fi\/URN:ISBN:978-952-60-8016-1\" title=\"http:\/\/urn.fi\/URN:ISBN:978-952-60-8016-1\" target=\"_blank\">http:\/\/urn.fi\/URN:ISBN:978-952-60-8016-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('2','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/adam.jpg\" width=\"70\" alt=\"AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Park, Seonwook;  Gebhardt, Christoph;  R\u00e4dle, Roman;  Feit, Anna Maria;  Vrzakova, Hana;  Dayama, Niraj;  Yeo, Hui-Shyong;  Klokmose, Clemens;  Quigley, Aaron;  Oulasvirta, Antti;  Hilliges, Otmar<\/p><p 
class=\"tp_pub_title\"><a href=\"\">AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">SIGCHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_1\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_1\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('1','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_1\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{park18,<br \/>\r\ntitle = {AdaM: Adapting Multi-User Interfaces for Collaborative Environments in Real-Time},<br \/>\r\nauthor = {Seonwook Park and Christoph Gebhardt and Roman R\u00e4dle and Anna Maria Feit and Hana Vrzakova and Niraj Dayama and Hui-Shyong Yeo and Clemens Klokmose and Aaron Quigley and Antti Oulasvirta and Otmar Hilliges},<br \/>\r\nurl = {https:\/\/ait.ethz.ch\/projects\/2018\/adam\/},<br \/>\r\ndoi = {10.1145\/3173574.3173758},<br \/>\r\nyear  = {2018},<br \/>\r\ndate = {2018-01-01},<br \/>\r\nurldate = {2018-01-01},<br \/>\r\nbooktitle = {SIGCHI Conference on Human Factors in Computing Systems},<br \/>\r\npublisher = {ACM},<br \/>\r\nseries = {CHI '18},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" 
onclick=\"teachpress_pub_showhide('1','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_1\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/ait.ethz.ch\/projects\/2018\/adam\/\" title=\"https:\/\/ait.ethz.ch\/projects\/2018\/adam\/\" target=\"_blank\">https:\/\/ait.ethz.ch\/projects\/2018\/adam\/<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3173574.3173758\" title=\"Follow DOI:10.1145\/3173574.3173758\" target=\"_blank\">doi:10.1145\/3173574.3173758<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('1','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2017\">2017<\/h3><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Feit, Anna Maria;  Williams, Shane;  Toledo, Arturo;  Paradiso, Ann;  Kulkarni, Harish;  Kane, Shaun;  Morris, Meredith Ringel<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=430\">Toward everyday gaze input: Accuracy and precision of eye tracking and implications for design<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">SIGCHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_address\">New York, NY, USA, <\/span><span class=\"tp_pub_additional_year\">2017<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450346559<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_15\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('15','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_15\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('15','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_15\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('15','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_15\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{Feit2017,<br \/>\r\ntitle = {Toward everyday gaze input: Accuracy and precision of eye tracking and implications for design},<br \/>\r\nauthor = {Anna Maria Feit and Shane Williams and Arturo Toledo and Ann Paradiso and Harish Kulkarni and Shaun Kane and Meredith Ringel Morris},<br \/>\r\nurl = {https:\/\/www.slideshare.net\/AnnaMariaFeit\/toward-everyday-gaze-input-accuracy-and-precision-of-eye-tracking-and-implications-for-design},<br \/>\r\ndoi = {10.1145\/3025453.3025599},<br \/>\r\nisbn = {9781450346559},<br \/>\r\nyear  = {2017},<br \/>\r\ndate = {2017-05-01},<br \/>\r\nurldate = {2017-05-01},<br \/>\r\nbooktitle = {SIGCHI Conference on Human Factors in Computing Systems},<br \/>\r\npublisher = {ACM},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nabstract = {For eye tracking to become a ubiquitous part of our everyday interaction with computers, we first need to understand its limitations outside rigorously controlled labs, and develop robust applications that can be used by a broad range of users and in various environments. Toward this end, we collected eye tracking data from 80 people in a calibration-style task, using two different trackers in two lighting conditions. 
We found that accuracy and precision can vary between users and targets more than six-fold, and report on differences between lighting, trackers, and screen regions. We show how such data can be used to determine appropriate target sizes and to optimize the parameters of commonly used filters. We conclude with design recommendations and examples of how our findings and methodology can inform the design of error-aware adaptive applications.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('15','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_15\" style=\"display:none;\"><div class=\"tp_abstract_entry\">For eye tracking to become a ubiquitous part of our everyday interaction with computers, we first need to understand its limitations outside rigorously controlled labs, and develop robust applications that can be used by a broad range of users and in various environments. Toward this end, we collected eye tracking data from 80 people in a calibration-style task, using two different trackers in two lighting conditions. We found that accuracy and precision can vary between users and targets more than six-fold, and report on differences between lighting, trackers, and screen regions. We show how such data can be used to determine appropriate target sizes and to optimize the parameters of commonly used filters. 
We conclude with design recommendations and examples of how our findings and methodology can inform the design of error-aware adaptive applications.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('15','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_15\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fab fa-slideshare\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.slideshare.net\/AnnaMariaFeit\/toward-everyday-gaze-input-accuracy-and-precision-of-eye-tracking-and-implications-for-design\" title=\"https:\/\/www.slideshare.net\/AnnaMariaFeit\/toward-everyday-gaze-input-accuracy-and[...]\" target=\"_blank\">https:\/\/www.slideshare.net\/AnnaMariaFeit\/toward-everyday-gaze-input-accuracy-and[...]<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3025453.3025599\" title=\"Follow DOI:10.1145\/3025453.3025599\" target=\"_blank\">doi:10.1145\/3025453.3025599<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('15','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Computational Support for Functionality Selection in Interaction Design\" src=\"https:\/\/cix.cs.uni-saarland.de\/wp-content\/uploads\/2021\/02\/functionality-selection.jpg\" width=\"70\" alt=\"Computational Support for Functionality Selection in Interaction Design\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Oulasvirta, Antti;  Feit, Anna Maria;  L\u00e4hteenlahti, Perttu;  Karrenbauer, Andreas<\/p><p class=\"tp_pub_title\"><a href=\"\">Computational Support for Functionality Selection in Interaction Design<\/a> <span class=\"tp_pub_type tp_  article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: 
<\/span><span class=\"tp_pub_additional_journal\">ACM Transactions on Computer-Human Interaction, <\/span><span class=\"tp_pub_additional_volume\">vol. 24, <\/span><span class=\"tp_pub_additional_number\">no. 5, <\/span><span class=\"tp_pub_additional_year\">2017<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 1073-0516<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_4\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_4\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_4\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('4','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_4\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10.1145\/3131608,<br \/>\r\ntitle = {Computational Support for Functionality Selection in Interaction Design},<br \/>\r\nauthor = {Antti Oulasvirta and Anna Maria Feit and Perttu L\u00e4hteenlahti and Andreas Karrenbauer},<br \/>\r\nurl = {https:\/\/doi.org\/10.1145\/3131608},<br \/>\r\ndoi = {10.1145\/3131608},<br \/>\r\nissn = {1073-0516},<br \/>\r\nyear  = {2017},<br \/>\r\ndate = {2017-01-01},<br \/>\r\nurldate = {2017-01-01},<br \/>\r\njournal = {ACM Transactions on Computer-Human Interaction},<br \/>\r\nvolume = {24},<br \/>\r\nnumber = {5},<br \/>\r\npublisher = {Association for Computing Machinery},<br \/>\r\naddress = {New York, NY, USA},<br \/>\r\nabstract = {Designing interactive technology entails several objectives, one of which is identifying and selecting appropriate functionality. 
Given candidate functionalities such as \u201cprint,\u201d \u201cbookmark,\u201d and \u201cshare,\u201d a designer has to choose which functionalities to include and which to leave out. Such choices critically affect the acceptability, productivity, usability, and experience of the design. However, designers may overlook reasonable designs because there is an exponential number of functionality sets and multiple factors to consider. This article is the first to formally define this problem and propose an algorithmic method to support designers to explore alternative functionality sets in early stage design. Based on interviews of professional designers, we mathematically define the task of identifying functionality sets that strike the best balance among four objectives: usefulness, satisfaction, ease of use, and profitability. We develop an integer linear programming solution that can efficiently solve very large instances (set size over 1,300) on a regular computer. Further, we build on techniques of robust optimization to search for diverse and surprising functionality designs. Empirical results from a controlled study and field deployment are encouraging. Most designers rated computationally created sets to be of comparable or superior quality to their own. Designers reported gaining better understanding of available functionalities and the design space.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_4\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Designing interactive technology entails several objectives, one of which is identifying and selecting appropriate functionality. 
Given candidate functionalities such as \u201cprint,\u201d \u201cbookmark,\u201d and \u201cshare,\u201d a designer has to choose which functionalities to include and which to leave out. Such choices critically affect the acceptability, productivity, usability, and experience of the design. However, designers may overlook reasonable designs because there is an exponential number of functionality sets and multiple factors to consider. This article is the first to formally define this problem and propose an algorithmic method to support designers to explore alternative functionality sets in early stage design. Based on interviews of professional designers, we mathematically define the task of identifying functionality sets that strike the best balance among four objectives: usefulness, satisfaction, ease of use, and profitability. We develop an integer linear programming solution that can efficiently solve very large instances (set size over 1,300) on a regular computer. Further, we build on techniques of robust optimization to search for diverse and surprising functionality designs. Empirical results from a controlled study and field deployment are encouraging. Most designers rated computationally created sets to be of comparable or superior quality to their own. 
Designers reported gaining better understanding of available functionalities and the design space.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_4\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1145\/3131608\" title=\"https:\/\/doi.org\/10.1145\/3131608\" target=\"_blank\">https:\/\/doi.org\/10.1145\/3131608<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/3131608\" title=\"Follow DOI:10.1145\/3131608\" target=\"_blank\">doi:10.1145\/3131608<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('4','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2016\">2016<\/h3><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><a href=\"https:\/\/userinterfaces.aalto.fi\/how-we-type\/\" target=\"_blank\"><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Feit, Anna Maria;  Weir, Daryl;  Oulasvirta, Antti<\/p><p class=\"tp_pub_title\"><a href=\"\">How We Type: Movement Strategies and Performance in Everyday Typing<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">SIGCHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_address\">New York, NY, US, <\/span><span class=\"tp_pub_additional_year\">2016<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 978-1-4503-3362-7<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_8\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('8','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_8\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('8','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_8\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('8','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_8\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{Feit2016,<br \/>\r\ntitle = {How We Type: Movement Strategies and Performance in Everyday Typing},<br \/>\r\nauthor = {Anna Maria Feit and Daryl Weir and Antti Oulasvirta},<br \/>\r\nurl = {https:\/\/userinterfaces.aalto.fi\/how-we-type\/},<br \/>\r\ndoi = {10.1145\/2858036.2858233},<br \/>\r\nisbn = {978-1-4503-3362-7},<br \/>\r\nyear  = {2016},<br \/>\r\ndate = {2016-01-01},<br \/>\r\nurldate = {2016-01-01},<br \/>\r\nbooktitle = {SIGCHI Conference on Human Factors in Computing Systems},<br \/>\r\npublisher = {ACM},<br \/>\r\naddress = {New York, NY, US},<br \/>\r\nseries = {CHI &#039;16},<br \/>\r\nabstract = {This paper revisits the present understanding of typing, which originates mostly from studies of trained typists using the ten-finger touch typing system. Our goal is to characterise the majority of present-day users who are untrained and employ diverse, self-taught techniques. In a transcription task, we compare self-taught typists and those that took a touch typing course. We report several differences in performance, gaze deployment and movement strategies. The most surprising finding is that self-taught typists can achieve performance levels comparable with touch typists, even when using fewer fingers. 
Motion capture data exposes 3 predictors of high performance: 1) unambiguous mapping (a letter is consistently pressed by the same finger), 2) active preparation of upcoming keystrokes, and 3) minimal global hand motion. We release an extensive dataset on everyday typing behavior.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('8','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_8\" style=\"display:none;\"><div class=\"tp_abstract_entry\">This paper revisits the present understanding of typing, which originates mostly from studies of trained typists using the ten-finger touch typing system. Our goal is to characterise the majority of present-day users who are untrained and employ diverse, self-taught techniques. In a transcription task, we compare self-taught typists and those that took a touch typing course. We report several differences in performance, gaze deployment and movement strategies. The most surprising finding is that self-taught typists can achieve performance levels comparable with touch typists, even when using fewer fingers. Motion capture data exposes 3 predictors of high performance: 1) unambiguous mapping (a letter is consistently pressed by the same finger), 2) active preparation of upcoming keystrokes, and 3) minimal global hand motion. 
We release an extensive dataset on everyday typing behavior.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('8','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_8\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/userinterfaces.aalto.fi\/how-we-type\/\" title=\"https:\/\/userinterfaces.aalto.fi\/how-we-type\/\" target=\"_blank\">https:\/\/userinterfaces.aalto.fi\/how-we-type\/<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/2858036.2858233\" title=\"Follow DOI:10.1145\/2858036.2858233\" target=\"_blank\">doi:10.1145\/2858036.2858233<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('8','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2015\">2015<\/h3><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_image_left\"><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Feit, Anna Maria;  Bachynskyi, Myroslav;  Sridhar, Srinath<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('16','tp_links')\" style=\"cursor:pointer;\">Towards Multi-Objective Optimization for UI Design<\/a> <span class=\"tp_pub_type tp_  workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Workshop on Principles, Techniques and Perspectives on Optimization and HCI, CHI'15, <\/span><span class=\"tp_pub_additional_address\">Seoul, Korea, <\/span><span class=\"tp_pub_additional_year\">2015<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_16\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('16','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span 
class=\"tp_resource_link\"><a id=\"tp_links_sh_16\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('16','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_16\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('16','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_16\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{Feit2015_multiobjective,<br \/>\r\ntitle = {Towards Multi-Objective Optimization for UI Design},<br \/>\r\nauthor = {Anna Maria Feit and Myroslav Bachynskyi and Srinath Sridhar},<br \/>\r\nurl = {http:\/\/annafeit.de\/resources\/papers\/Multiobjective_Optimization2015.pdf},<br \/>\r\nyear  = {2015},<br \/>\r\ndate = {2015-04-01},<br \/>\r\nbooktitle = {Workshop on Principles, Techniques and Perspectives on Optimization and HCI, CHI'15},<br \/>\r\naddress = {Seoul, Korea},<br \/>\r\nabstract = {In recent years computational optimization has been applied to the problem of finding good designs for user interfaces with huge design spaces. There, designers are struggling to integrate many different objectives into the design process, such as ergonomics, learnability or performance. However, most computationally designed interfaces are optimized with respect to only one objective. In this paper we argue that multi-objective optimization is needed to improve over manual designs. We identify 8 categories that cover design principles from UI design and usability engineering. 
We propose a multi-objective function in form of a linear combination of these factors and discuss benefits and pitfalls of multi-objective optimization.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('16','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_16\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In recent years computational optimization has been applied to the problem of finding good designs for user interfaces with huge design spaces. There, designers are struggling to integrate many different objectives into the design process, such as ergonomics, learnability or performance. However, most computationally designed interfaces are optimized with respect to only one objective. In this paper we argue that multi-objective optimization is needed to improve over manual designs. We identify 8 categories that cover design principles from UI design and usability engineering. 
We propose a multi-objective function in form of a linear combination of these factors and discuss benefits and pitfalls of multi-objective optimization.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('16','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_16\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/annafeit.de\/resources\/papers\/Multiobjective_Optimization2015.pdf\" title=\"http:\/\/annafeit.de\/resources\/papers\/Multiobjective_Optimization2015.pdf\" target=\"_blank\">http:\/\/annafeit.de\/resources\/papers\/Multiobjective_Optimization2015.pdf<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('16','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><a href=\"http:\/\/handtracker.mpi-inf.mpg.de\/projects\/HandDexterity\/\" target=\"_blank\"><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Sridhar, Srinath;  Feit, Anna Maria;  Theobalt, Christian;  Oulasvirta, Antti<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=428\">Investigating the Dexterity of Multi-Finger Input for Mid-Air Text Entry<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">SIGCHI Conference on Human Factors in Computing Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_address\">New York, New York, USA, <\/span><span class=\"tp_pub_additional_year\">2015<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 9781450331456<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a 
id=\"tp_links_sh_9\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_9\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('9','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_9\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{Sridhar2015,<br \/>\r\ntitle = {Investigating the Dexterity of Multi-Finger Input for Mid-Air Text Entry},<br \/>\r\nauthor = {Srinath Sridhar and Anna Maria Feit and Christian Theobalt and Antti Oulasvirta},<br \/>\r\nurl = {http:\/\/handtracker.mpi-inf.mpg.de\/projects\/HandDexterity\/},<br \/>\r\ndoi = {10.1145\/2702123.2702136},<br \/>\r\nisbn = {9781450331456},<br \/>\r\nyear  = {2015},<br \/>\r\ndate = {2015-01-01},<br \/>\r\nurldate = {2015-01-01},<br \/>\r\nbooktitle = {SIGCHI Conference on Human Factors in Computing Systems},<br \/>\r\npublisher = {ACM},<br \/>\r\naddress = {New York, New York, USA},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_9\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/handtracker.mpi-inf.mpg.de\/projects\/HandDexterity\/\" title=\"http:\/\/handtracker.mpi-inf.mpg.de\/projects\/HandDexterity\/\" target=\"_blank\">http:\/\/handtracker.mpi-inf.mpg.de\/projects\/HandDexterity\/<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/2702123.2702136\" title=\"Follow DOI:10.1145\/2702123.2702136\" 
target=\"_blank\">doi:10.1145\/2702123.2702136<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('9','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2014\">2014<\/h3><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_image_left\"><a href=\"http:\/\/annafeit.de\/pianotext\" target=\"_blank\"><\/a><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Feit, Anna Maria;  Oulasvirta, Antti<\/p><p class=\"tp_pub_title\"><a href=\"https:\/\/cix.cs.uni-saarland.de\/?page_id=427\">PianoText: Redesigning the Piano Keyboard for Text Entry<\/a> <span class=\"tp_pub_type tp_  inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Conference on Designing Interactive Systems, <\/span><span class=\"tp_pub_additional_publisher\">ACM, <\/span><span class=\"tp_pub_additional_year\">2014<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 978-1-4503-2902-6<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_18\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('18','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_18\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('18','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_18\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('18','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_18\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{Feit2014,<br \/>\r\ntitle = {PianoText: Redesigning the Piano Keyboard for Text Entry},<br \/>\r\nauthor = {Anna Maria 
Feit and Antti Oulasvirta},<br \/>\r\nurl = {http:\/\/annafeit.de\/pianotext},<br \/>\r\ndoi = {10.1145\/2598510.2598547},<br \/>\r\nisbn = {978-1-4503-2902-6},<br \/>\r\nyear  = {2014},<br \/>\r\ndate = {2014-01-01},<br \/>\r\nurldate = {2014-01-01},<br \/>\r\nbooktitle = {Conference on Designing Interactive Systems},<br \/>\r\npublisher = {ACM},<br \/>\r\nseries = {DIS &#039;14},<br \/>\r\nabstract = {Inspired by the high keying rates of skilled pianists, we study the design of piano keyboards for rapid text entry. We review the qualities of the piano as an input device, observing four design opportunities: 1) chords, 2) redundancy (more keys than letters in English), 3) the transfer of musical skill and 4) optional sound feedback. Although some have been utilized in previous text entry methods, our goal is to exploit all four in a single design. We present PianoText, a computationally designed mapping that assigns letter sequences of English to frequent note transitions of music. It allows fast text entry on any MIDI-enabled keyboard and was evaluated in two transcription typing studies. Both show an achievable rate of over 80 words per minute. This parallels the rates of expert Qwerty typists and doubles that of a previous piano-based design from the 19th century. We also design PianoText-Mini, which allows for comparable performance in a portable form factor. 
Informed by the studies, we estimate the upper bound of typing performance, draw implications to other text entry methods, and critically discuss outstanding design challenges.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('18','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_18\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Inspired by the high keying rates of skilled pianists, we study the design of piano keyboards for rapid text entry. We review the qualities of the piano as an input device, observing four design opportunities: 1) chords, 2) redundancy (more keys than letters in English), 3) the transfer of musical skill and 4) optional sound feedback. Although some have been utilized in previous text entry methods, our goal is to exploit all four in a single design. We present PianoText, a computationally designed mapping that assigns letter sequences of English to frequent note transitions of music. It allows fast text entry on any MIDI-enabled keyboard and was evaluated in two transcription typing studies. Both show an achievable rate of over 80 words per minute. This parallels the rates of expert Qwerty typists and doubles that of a previous piano-based design from the 19th century. We also design PianoText-Mini, which allows for comparable performance in a portable form factor. 
Informed by the studies, we estimate the upper bound of typing performance, draw implications to other text entry methods, and critically discuss outstanding design challenges.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('18','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_18\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/annafeit.de\/pianotext\" title=\"http:\/\/annafeit.de\/pianotext\" target=\"_blank\">http:\/\/annafeit.de\/pianotext<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1145\/2598510.2598547\" title=\"Follow DOI:10.1145\/2598510.2598547\" target=\"_blank\">doi:10.1145\/2598510.2598547<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('18','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><\/div><\/div>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":3,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/cix.cs.uni-saarland.de\/index.php?rest_route=\/wp\/v2\/pages\/32"}],"collection":[{"href":"https:\/\/cix.cs.uni-saarland.de\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/cix.cs.uni-saarland.de\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/cix.cs.uni-saarland.de\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/cix.cs.uni-saarland.de\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=32"}],"version-history":[{"count":6,"href":"https:\/\/cix.cs.uni-saarland.de\/index.php?rest_route=\/wp\/v2\/pages\/32\/revisions"}],"predecessor-version":[{"id":102,"href":"https:\/\/cix.cs.uni-saarland.de\/index.php?rest_route=\/wp\/v2\/
pages\/32\/revisions\/102"}],"wp:attachment":[{"href":"https:\/\/cix.cs.uni-saarland.de\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=32"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}