DOI: 10.1145/3610977.3634966
Research Article | Open Access

Understanding Large-Language Model (LLM)-powered Human-Robot Interaction

Published: 11 March 2024

Abstract

Large-language models (LLMs) hold significant promise in improving human-robot interaction, offering advanced conversational skills and versatility in managing diverse, open-ended user requests in various tasks and domains. Despite the potential to transform human-robot interaction, very little is known about the distinctive design requirements for utilizing LLMs in robots, which may differ from text and voice interaction and vary by task and context. To better understand these requirements, we conducted a user study (n = 32) comparing an LLM-powered social robot against text- and voice-based agents, analyzing task-based requirements in conversational tasks, including choose, generate, execute, and negotiate. Our findings show that LLM-powered robots elevate expectations for sophisticated non-verbal cues and excel in connection-building and deliberation, but fall short in logical communication and may induce anxiety. We provide design implications both for robots integrating LLMs and for fine-tuning LLMs for use with robots.



Information

Published In

HRI '24: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
March 2024
982 pages
ISBN:9798400703225
DOI:10.1145/3610977
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. human-robot interaction
  2. large language models
  3. social robots

Qualifiers

  • Research-article

Funding Sources

  • Sheldon B. and Marianne S. Lubar Professorship, an H.I. Romnes Faculty Fellowship, and a National Science Foundation award

Conference

HRI '24

Acceptance Rates

Overall Acceptance Rate 268 of 1,124 submissions, 24%


Article Metrics

  • Downloads (Last 12 months)2,812
  • Downloads (Last 6 weeks)648
Reflects downloads up to 21 Oct 2024

Cited By
  • (2024) Enhancing user experience and trust in advanced LLM-based conversational agents. Computing and Artificial Intelligence 2:2 (1467). https://doi.org/10.59400/cai.v2i2.1467. Online publication date: 17-Aug-2024
  • (2024) The Future of Intelligent Healthcare: A Systematic Analysis and Discussion on the Integration and Impact of Robots Using Large Language Models for Healthcare. Robotics 13:8 (112). https://doi.org/10.3390/robotics13080112. Online publication date: 23-Jul-2024
  • (2024) Large language models can help boost food production, but be mindful of their risks. Frontiers in Artificial Intelligence 7. https://doi.org/10.3389/frai.2024.1326153. Online publication date: 25-Oct-2024
  • (2024) Empathy-GPT: Leveraging Large Language Models to Enhance Emotional Empathy and User Engagement in Embodied Conversational Agents. Adjunct Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-3. https://doi.org/10.1145/3672539.3686729. Online publication date: 13-Oct-2024
  • (2024) The AI-DEC: A Card-based Design Method for User-centered AI Explanations. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1010-1028. https://doi.org/10.1145/3643834.3661576. Online publication date: 1-Jul-2024
  • (2024) "This really lets us see the entire world:" Designing a conversational telepresence robot for homebound older adults. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2450-2467. https://doi.org/10.1145/3643834.3660710. Online publication date: 1-Jul-2024
  • (2024) Creative Commuter: Towards Designing Moments for Idea Generation and Incubation during the Commute. Adjunct Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 51-55. https://doi.org/10.1145/3641308.3685022. Online publication date: 22-Sep-2024
  • (2024) Action2Code: Transforming Video Demonstrations into Sequential Robotic Instructions. 2024 21st International Conference on Ubiquitous Robots (UR), 92-99. https://doi.org/10.1109/UR61395.2024.10597493. Online publication date: 24-Jun-2024
  • (2024) Leveraging LLMs for Unstructured Direct Elicitation of Decision Rules. Customer Needs and Solutions 11:1. https://doi.org/10.1007/s40547-024-00151-4. Online publication date: 23-Oct-2024
  • (2024) HistNERo: Historical Named Entity Recognition for the Romanian Language. Document Analysis and Recognition - ICDAR 2024, 126-144. https://doi.org/10.1007/978-3-031-70543-4_8. Online publication date: 9-Sep-2024
