Tank
the Roboceptionist is the most recent addition to Carnegie Mellon’s
Social Robots Project. A permanent installation in the entranceway
to Newell-Simon Hall, the robot combines useful functionality—giving
directions, looking up weather forecasts, etc.—with an interesting
and compelling character. We are using Tank to investigate human-robot
social interaction, especially long-term human-robot “relationships.”
We have found that many visitors continue to interact with Tank’s
predecessor, Valerie, on a daily basis, but that few of
the individual interactions last for more than 30 seconds. Our analysis
of the data has indicated several design decisions that should facilitate
more natural human-robot interactions.
While
many researchers are investigating human-robot social interaction,
one area that remains relatively unexplored is that of continued
long-term interaction. The Roboceptionist (“robot receptionist”)
Project, part of the Social Robots Project, is investigating how
a social robot can remain compelling over a long period of time—days,
weeks, and even years. Our approach is to create a robot that can
provide useful services, but that also exhibits personality and
character. The robot was designed for ease of interaction without
requiring any training or expertise, and to be compelling enough
to encourage multiple visits over extended periods of time. The
character we have designed, named Tank, is built from a mobile base
with a moving flat-panel monitor mounted on top, which displays a
graphical human-like face. Tank remains stationary inside a small
booth near the main entrance of Newell-Simon Hall at Carnegie Mellon
University. Anyone who walks through the building, including students,
faculty, and visitors, can interact with the robot.
The
Social Robots Project began with the goal of investigating human-robot
social interaction. Experiments with the robot Vikia studied the
effects of attentive movement and an animated face on people’s
willingness to engage in a short interaction with a robot [1]. These
experiments confirmed the group’s intuitions that both movement
and a recognizable face have a positive impact on human-robot social
interaction. Grace, a joint project by our group and a number of
other research institutions, has participated in the AAAI robot
challenge for several years [2]. The challenge requires a robot
to register for the conference, find the room it is scheduled to
speak in, and give a short talk about its own capabilities. Social
interaction is vital to performing these tasks successfully. Grace
uses conversational capabilities similar to Tank’s to interact
with workers at the registration desk in a socially appropriate
manner. A number of other research groups are also using robots
to explore social interaction. Kismet [3] and Sparky [4] both used
facial expression and movement to interact with humans. Unlike Tank,
these robots engaged in only short-term, nonverbal interactions,
and their purpose was not to provide users with useful information.
On the other hand, a number of robots have been designed over the
years to serve as tour guides for museum visitors [5]–[7].
Like Tank and Valerie, their purpose is to inform as well as to
entertain. These robots also use speech capabilities to provide
users with useful information, and they use facial and emotional
expressions to improve the quality of interaction. However, these
interactions are fairly structured and primarily one-way—people
do not actively converse with the robots. The Nursebot [8] is another
robot that uses social competence to improve task performance. That
project’s goals were similar to our own in that it aimed to
create a robot that engaged in repeated interactions with people
over an extended period of time. Robovie, an interactive humanoid
robot, has been used in long-term interaction studies with children,
but its designers noted that it “failed to keep most of the
children’s interest after the 1st week” [9]. With Tank,
we hope to maintain interest over longer periods.
The
Roboceptionist Project is the product of a collaboration between
the Robotics Institute and the School of Drama at Carnegie Mellon.
Planning and design were conducted for almost a year prior to the
deployment of Valerie (Tank’s predecessor). Some of the major design
decisions are detailed below.
We
wanted the robot to be familiar and non-threatening to people who
access the building (primarily non-roboticists). We chose a receptionist
as Valerie’s role for several reasons:
• Receptionists have frequent interaction with the public,
and people have well-understood expectations for how to interact
with receptionists.
• Tank and Valerie are capable of handling some of the tasks
that a receptionist would perform, such as looking up office numbers
and providing directions.
• We could station the robot in a public space in order to
maximize the number of interactions with humans. In addition, the
robot could be located behind a desk, which provides some security
for the hardware.
In
order to make the robot a compelling presence, we elected to make
it human-like in its interactions. The Drama group helped to imbue
the robot with human characteristics by giving him a name, a personality,
a back-story, and several storylines that unfold over time. Events
in his life are related in “conversation” to visitors
who stop to chat with him. In addition, people can keep up with
Tank’s life online at http://www.roboceptionist.com.
Tank,
and Valerie before him, enable a new form of storytelling. His entire
story, as well as character-related vocalizations and behaviors,
was scripted by students in the School of Drama. Complex storylines
interweave and evolve over a period of several months. For example,
he has a checkered past with organizations like NASA and the CIA,
and he has a bad relationship with his father. Writers and designers
must deal with a character that has no vocal intonation, no natural
facial expressions, and no form of natural movement. Fundamental
assumptions regarding the creation of live storytelling had to be
reviewed; what works with humans often does not work with robots.
Tank’s
“head” is a flat-screen LCD monitor mounted on a pan-tilt
unit. His “face” is a graphically
rendered 3D model. The facial modeling and expressions were created
by members of the Drama group. Choosing a graphical, rather than
mechanical, face was a significant design decision. The flat-screen
face offers several advantages over a mechanical face:
• The graphical face is very expressive, with the ability
to move individual muscles to generate a wide range of facial expressions.
• A mechanical face is less reliable than a graphical one,
due to its many moving parts.
• Changes can easily be made to the graphical face. For example,
as part of one story, Valerie’s hairstyle changed. A physical
mechanism would be more difficult to modify.
The greatest disadvantage of the graphical face is that it lacks
the physical embodiment of a mechanical face. In particular, although
the head rotates to face visitors, it can be difficult to determine
exactly where the robot is looking.
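As a simple illustration of the head-pointing behavior mentioned above, the sketch below computes the pan angle needed to turn the head toward a visitor detected at a given position. The coordinate convention, the pan limit, and all names here are assumptions made for illustration; this is not the project’s actual control code.

import math

PAN_LIMIT_DEG = 90.0  # assumed mechanical limit of the pan axis

def pan_angle_to_visitor(x: float, y: float) -> float:
    """Return the pan angle (degrees) that turns the head toward a visitor
    at (x, y), where x is meters to the robot's right and y is meters ahead.
    (Illustrative convention only.)"""
    angle = math.degrees(math.atan2(x, y))
    # Clamp the command to the head's assumed mechanical range.
    return max(-PAN_LIMIT_DEG, min(PAN_LIMIT_DEG, angle))

if __name__ == "__main__":
    # A visitor 0.5 m to the right and 1.5 m ahead yields roughly 18 degrees.
    print(round(pan_angle_to_visitor(0.5, 1.5), 1))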
Decisions
about the mode and structure of interactions were driven by a desire
to ensure that visitors do not become frustrated with the system
and are satisfied with the interactions.
1)
Storytelling: One of Tank’s primary interaction modes is storytelling.
Tank’s story is told in a very human way: subjectively and
evolving over time. His story is revealed through monologues, which
are styled as phone conversations with characters in his life. The
writers from Drama crafted storylines that evolved over the school
year. Storytelling was chosen in order to make the robot appear
more human-like and thus to allow visitors to interact easily with
him. Tank’s evolving life stories follow a well-known model—that
of the soap opera, or of the currently popular “reality”
show. By making Tank a compelling character, we hope to encourage
people to visit the robot repeatedly over time, in the same way
that they might eagerly tune in to each episode of Desperate
Housewives, As the World Turns, or Survivor.
2)
Keyboard Input: Both speech and keyboard input modalities were considered
for visitors’ interactions with Tank. Speech is more natural
for most people, but keyboard input is easier to control and more
reliable than general speech recognition systems, which typically
require either training for individual users or a drastic reduction
in the allowable vocabulary [10]. In addition, having visitors interact
vocally was unlikely to be robust due to the placement of the robot
in a busy hallway. Speech recognition systems are generally poor
at handling noise and echoes in the environment. While a head-worn
microphone can reduce the effect of ambient noise, we felt that
requiring visitors to first don a headset would detract significantly
from the overall experience.
3)
Chatbot: For handling natural language input, the decision to use
a pattern-matching “chatbot” rather than a grammatical
parser was based on the ease of adding information and the ability
to recognize novel sentences. Grammatical parsing would make
new sentences difficult to add, requiring additions to the dictionary
and to the grammar, and few existing systems can handle sentence
fragments well. A rule-based pattern-matching system was therefore chosen.
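For a sense of how such a system works, the sketch below pairs regular-expression patterns with canned responses and falls back to a default reply when nothing matches, driven by keyboard input. The patterns, responses, and function names are invented examples, not Tank’s actual rule set.

import random
import re

# Each rule pairs a regular-expression pattern with candidate responses.
# These example rules are invented for illustration only.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     ["Hello! Welcome to Newell-Simon Hall.", "Hi there. How can I help you?"]),
    (re.compile(r"\boffice\b", re.I),
     ["Whose office are you looking for?"]),
    (re.compile(r"\bweather\b", re.I),
     ["Let me check today's forecast for Pittsburgh."]),
]

FALLBACK = "I'm sorry, I don't know about that. Can I help you find an office?"

def respond(utterance: str) -> str:
    """Return the first matching rule's response, or a fallback.
    Pattern matching tolerates sentence fragments and novel phrasings
    better than a grammatical parser would."""
    for pattern, responses in RULES:
        if pattern.search(utterance):
            return random.choice(responses)
    return FALLBACK

if __name__ == "__main__":
    # Keyboard input, as in the deployed kiosk, sidesteps speech-recognition
    # errors from the noisy hallway environment.
    while True:
        try:
            line = input("> ")
        except EOFError:
            break
        print(respond(line))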
Adapted from:
“Designing Robots for Long-Term Social Interaction,” presented at the
2005 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Rachel Gockley, Allison Bruce, Jodi Forlizzi, Marek Michalowski, Anne Mundell,
Stephanie Rosenthal, Brennan Sellner, Reid Simmons, Kevin Snipes,
Alan C. Schultz†, and Jue Wang
Carnegie Mellon University, Pittsburgh, PA
†Naval Research Laboratory, Washington, DC
Copyright property of the IEEE.
[1] A. Bruce, I. Nourbakhsh, and R. Simmons. The role of expressiveness
and attention in human-robot interaction. In IEEE International Conference
on Robotics and Automation, 2002.
[2] R. Simmons et al. Grace: An autonomous robot for the AAAI robot
challenge. AI Magazine, 24(2):51–72, Summer 2003.
[3] C. Breazeal and B. Scassellati. How to build robots that make friends
and influence people. In IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 1999), Kyongju, Korea, 1999.
[4] M. Scheeff. Experiences with Sparky: A social robot, 2000.
[5] W. Burgard et al. Experiences with an interactive museum tour-guide
robot. Artificial Intelligence, 114(1–2):3–55, 1999.
[6] T. Willeke, C. Kunz, and I. Nourbakhsh. The history of the Mobot
museum robot series: An evolutionary study, 2001.
[7] R. Siegwart et al. Robox at Expo.02: A large-scale installation of
personal robots. Robotics and Autonomous Systems, Special Issue on
Socially Interactive Robots, 42:203–222, 2003.
[8] M. Montemerlo, J. Pineau, N. Roy, S. Thrun, and V. Verma. Experiences
with a mobile robotic guide for the elderly, 2002.
[9] T. Kanda, T. Hirano, D. Eaton, and H. Ishiguro. Interactive robots as
social partners and peer tutors for children: A field trial. Human-Computer
Interaction, 19:61–84, 2004.
[10] R. Cole, J. Mariani, H. Uszkoreit, A. Zaenen, and V. Zue. Survey of
the state of the art in human language technology, 1995.