Sound in human-robot interaction currently encompasses a wide range of approaches and methodologies that are not easily classified, analyzed, or compared across projects.
List of contents
1. The Landscape of Sound and Robotics
Part 1: Speech
2. Effects of Number of Voices and Voice Type on Storytelling Experience and Robot Perception
3. Learning from Humans: How Research on Vocalizations can Inform the Conceptualization of Robot Sound
4. Talk To Me: Using speech for loss-of-trust mitigation in social robots
5. Grounding Spoken Language

Part 2: Non-semantic Audio
6. Consequential Sounds and their Effect on Human Robot Interaction
7. Robot Sound in Distributed Audio Environments
8. Navigating Robot Sonification: Exploring Four Approaches to Sonification in Autonomous Vehicles
9. Towards Improving User Experience and Shared Task Performance with Mobile Robots through Parameterized Nonverbal State Sonification
10. How Happy Should I Be? Leveraging Neuroticism and Extraversion for Music-Driven Emotional Interaction in Robotics
11. Augmenting a Group of Task-Driven Robotic Arms with Emotional Musical Prosody

Part 3: Robotic Musicianship and Musical Robots
12. Musical Robots: Overview and Methods for Evaluation
13. Robotic Dancing, Emotional Gestures and Prosody: A Framework for Gestures of Three Robotic Platforms
14. Dead Stars and 'Live' Singers: Posthumous 'Holographic' Performances in the US and Japan
About the author
Richard Savery is a Research Fellow at Macquarie University, Australia, working at the intersection of sound and robotics. He completed his doctorate in Music Technology at Georgia Tech, USA, focusing on the use of non-verbal audio for improved human-robot interaction.