Events

First iCat Workshop at the High Tech Campus Eindhoven


On 27 March 2006 the first iCat Workshop took place at the High Tech Campus Eindhoven. Members of the iCat Community from France, Germany, Sweden and The Netherlands presented their current research with the iCat.

The talks raised for discussion topics such as how a robotic companion can be integrated into everyday life, what tasks it might take on, which abilities are necessary, and how these can be realized.

We would like to thank all participants for attending and for contributing to the success of this event. In particular, we want to thank the speakers for presenting their research results and their experiences with the iCat; the insights they shared formed the basis for a lively discussion. To recap the main points and to motivate further investigations, the slides from the talks are available here for download.

iCat, Look Here!

Mark Hanheide and Frank Hegel, Bielefeld University, Germany
Non-verbal feedback cues are essential for an implicit grounding of communication acts. In human face-to-face dialogues, non-verbal cues such as gaze direction, facial expressions, and head movements are frequently used to assure each other of mutual attention, and to express understanding or conversational problems. Even agreement, sympathy, or disagreement with what the partner is talking about is often expressed by such non-verbal cues. In our talk we will sketch several research areas of our group that we aim to combine on the iCat platform in order to study different aspects of human-machine interaction (HMI). The talk will cover previous work on multi-modal attention, results of comparative user studies regarding emotion readability, and our next steps towards establishing the iCat as an interaction partner in the scenarios we are focusing on.

Experimenting with iCat in an eldercare environment

Marcel Heerink, Instituut voor Information Engineering, Hogeschool van Amsterdam, The Netherlands
The presentation will describe our experiences using the iCat to collect user experience data on human-robot interaction in nursing homes for the elderly. This study focuses on examining the influence of a robot's perceived social abilities on users' attitude towards and acceptance of the robot. Lessons from two experiments will be used to develop guidelines that support human-robot user studies with elderly users, in particular for setting up experiments in an eldercare institution. Results show that participants who were confronted with the more socially communicative version of the robot felt more comfortable and were more expressive in communicating with it. This suggests that the more socially communicative condition would be more likely to be accepted as a conversational partner. However, the findings did not show a significant correlation between perceived social abilities and technology acceptance.

On looking and learning for companion robots

Pieter Jonker, Delft Technical University, The Netherlands
One of the main problems facing society in the next decade is the rapidly growing number of elderly people. With growing age, health problems tend to pile up, and already 50% of health expenses go to labor costs for the care of elderly people. However, it is forecast that the care that needs to be given cannot possibly be delivered with the [wo]manpower that will be available in the near future.
To relieve this pressure, domotica (home automation) solutions, e.g. smart apartments, smart household equipment and alarm systems, can be used to leave caregivers more time for human-to-human interaction. To help avoid loneliness, human-to-human interaction can also be stimulated by using versatile communication systems, e.g. based on TV sets with set-top boxes, DSL and always-on video connections (easy MSN for the elderly). For all this, proper human-machine interfacing is crucial, and companion robots such as the iCat, PaPeRo, and AIBO can play a very important role. Especially for people with dementia, companion robots need to be trustworthy and robust interfaces to a known outside world of familiar faces.
We research systems that can interpret scenes and intentions. In this presentation we will show a real-time stereo Smart-Camera that can be used to interpret a room and its occupants in real time, in support of intelligent alarms such as fall detection. Secondly, we are able to teach robots to behave in a certain way, and this learned behavior appears to be very robust in all kinds of situations. This can be explained by the fact that the number of different states that can be distinguished in the learned behavior is on the order of millions, whereas human-programmed behavior yields state spaces on the order of a hundred at most.
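
As a rough illustration of this contrast, consider the following back-of-the-envelope calculation in Python; the sensor channels, bin counts and hand-coded states are assumed purely for the sake of the example and are not taken from the Delft system:

    # Back-of-the-envelope illustration of the state-space contrast above.
    # The channel and bin counts, and the hand-coded states, are assumed for
    # illustration only; they are not taken from the actual Delft system.
    sensor_channels = 6        # e.g. discretised depth, motion, pose channels (assumed)
    bins_per_channel = 10      # resolution of the discretisation (assumed)
    learned_states = bins_per_channel ** sensor_channels
    print(f"learned behaviour distinguishes ~{learned_states:,} states")   # 1,000,000

    # A hand-programmed controller enumerates its states explicitly.
    hand_coded_states = ["idle", "track_person", "greet", "raise_alarm", "recover"]
    print(f"hand-coded behaviour has {len(hand_coded_states)} explicit states")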

Design and evaluation of a robotic TV-assistant

Bernt Meerbeek, Philips Research, The Netherlands
In this talk, I will present my work on the design and evaluation of a personality for the robotic user interface "iCat". An application was developed that helps users find a TV-programme that fits their interests. Questions that were addressed include: What personality do users prefer for the robotic TV-assistant? What level of control do they prefer? How do personality and the level of control relate to each other? Two experiments were conducted. The first demonstrated that it is possible to create synthetic personalities of the TV-assistant by applying various social cues. For the second experiment, four prototypes were developed by combining two personalities and two levels of user control. In the high control condition, a speech-based command-and-control interaction style was used, whereas the interaction style in the low control condition consisted of speech-based system-initiative natural language dialogue. The results demonstrated an interaction between the effects of personality and level of control on user preferences. Overall, the most preferred combination was an extravert and friendly personality with low user control. Additionally, we found that perceived level of control was influenced by personality. This suggests that personality can be used as a means to increase the amount of control that users perceive.

DenK and the iCat

Robbert-Jan Beun, Rogier van Eijk and Huub Prüst, University of Utrecht, The Netherlands
In the nineties a long-term collaborative project on the development of a cooperative user interface was carried out at the Universities of Eindhoven and Tilburg. The project - called DenK[1] - combined fundamental research in knowledge representation, communication, natural language semantics and pragmatics, and object-oriented animation. Central was the idea that, from a user's point of view, a computer should ideally present itself as a cooperative 'electronic assistant' that is knowledgeable about the domain of the application and interacts in an intelligent and cooperative way with the user, using natural language and other modalities. Although there are many differences between the two approaches, the iCat platform and the DenK model basically share some important ideas. Therefore, it is to be expected that essential knowledge from the DenK project can be reused in modules that support the iCat's behaviour. In our presentation we will explain the underlying principles of the DenK project and show how the knowledge that resulted from the project may be used for the iCat.
[1] DenK is an abbreviation of 'Dialoogvoering en Kennisopbouw', which roughly means 'Dialogue Management and Knowledge Acquisition'.

OPPR version 1.2

Dennis Taapken, Philips Research, The Netherlands
During this presentation an update of the Open Platform for Personal Robotics (OPPR) software is presented. Based on feedback obtained through the iCat community forum [1], performance improvements have been made to the OPPR software. Furthermore, a new feature of the OPPR system is presented: the virtual iCat. The virtual iCat is a graphical representation of the physical iCat that can replace the physical iCat in the Animation Editor and Animation Module. It gives you the possibility to develop an on-screen iCat character and to create animations without using the physical iCat.
[1] iCat community forum, http://www.hitech-projects.com/icat

The iCat in the JAST multimodal dialogue system

Mary Ellen Foster, Technische Universität München, Germany
We describe how the Philips iCat is used in the multimodal dialogue system being built as part of the JAST project ("Joint-Action Science and Technology"; http://www.jast-net.gr/). The goal of JAST as a whole is to investigate the cognitive and communicative aspects of jointly-acting agents, both human and artificial; the dialogue system is intended as a platform to integrate the project's empirical findings on cognition and dialogue with its work on autonomous robots.
In the JAST dialogue system, the user and a robot work jointly to assemble a Baufix construction toy, communicating through speech, gestures, and facial motions. The robot consists of a pair of robot arms, mounted to resemble human arms, and a Philips iCat head. The iCat provides two forms of output: synthesised speech with coordinated lip movements and facial expressions, and the ability to gaze at the user or at a relevant object in the common workspace.
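
As a sketch of the geometry behind gazing at an object in the common workspace (assuming the gaze is driven by pan/tilt angles and that object positions are known in a head-centred frame; the coordinate convention and function below are illustrative, not the JAST system's actual interface):

    import math

    def gaze_angles(x, y, z):
        """Convert a target point (metres, in an assumed head-centred frame:
        x forward, y left, z up) into pan/tilt angles in degrees."""
        pan = math.degrees(math.atan2(y, x))                   # left/right rotation
        tilt = math.degrees(math.atan2(z, math.hypot(x, y)))   # up/down rotation
        return pan, tilt

    # Example: a Baufix part 0.4 m ahead, 0.15 m to the left, 0.1 m below eye level.
    pan, tilt = gaze_angles(0.4, 0.15, -0.1)
    print(f"pan {pan:.1f} deg, tilt {tilt:.1f} deg")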

Distributed embodied ePartners in a ubiquitous computing environment

Mark Neerincx, TNO Human Factors/Delft University of Technology, The Netherlands
Living and working environments contain more and more networked information compilations and technical equipment. We envision a collection of distributed and connected personal electronic partners, ePartners, acting in such environments to support (distributed) human actors in specific tasks, such as health care actions by diabetics, technology use by the elderly, and the disclosure of feelings by team members during prolonged exploration missions in high-demand situations. An ePartner has three important characteristics. First, it predicts a person's momentary needs by on-line gathering and modelling of human, machine (technology), task and context information. Second, it attunes the interaction to these needs by (semi-)automatic tailoring of support, content and dialogue. Third, it establishes "natural" or "intuitive" human-machine communication by expressing and interpreting communicative acts based on a common reference. At TNO Human Factors and Delft University, we use the iCat to develop models and prototypes for effective and social communication between a person and his or her ePartner. Research questions centre on (1) the sensing and generation of affective expressions (face, voice), (2) the application of different communication and assistance styles such as "motivational interviewing" and "cooperative anamnesis", and (3) the effects of the embodiment of an ePartner (e.g. compared to a virtual character).

URBI for iCat

Jean-Christophe Baillie, ENSTA/UEI Lab, France
URBI is a universal interface for controlling robots, from both the hardware and the software standpoint. It is based on a powerful script language that natively includes parallelism, event-based programming, motor trajectory control and many useful abstractions for robotics and AI. URBI already works with the Aibo, HRP2, Webots and Pioneer robots. We will present in detail what URBI is and what the benefits of URBI are for the iCat.
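
As a rough sketch of what controlling a robot through URBI looks like from the client side (assuming the usual URBI setup of a server accepting textual urbiscript commands over TCP; the hostname, port and device names below are assumptions for illustration and may not match the iCat bindings):

    import socket

    # The port (54000 is the conventional URBI default), the hostname and the
    # device names "headPan"/"headTilt" are assumptions made for illustration.
    URBI_HOST = "icat.local"   # hypothetical host running the URBI server
    URBI_PORT = 54000

    with socket.create_connection((URBI_HOST, URBI_PORT)) as s:
        # urbiscript commands are plain text terminated by a semicolon;
        # '&' runs the two assignments in parallel, one of URBI's core features.
        s.sendall(b"headPan.val = 15 & headTilt.val = -5;\n")
        print(s.recv(4096).decode(errors="replace"))   # server reply / status line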

Creating dialogue and reasoning modules for the Dutch Companion

Bas Steunebrink, Christian Mol, Nieske Vergunst, Intelligent Systems Group at Utrecht University, The Netherlands
The goal of the Dutch Companion project is to build a prototype of a sociable robot, aimed specifically at communication with the elderly or truck drivers. For this project, we use the iCat as an experimentation platform. The contributions of Utrecht University to the project are a dialogue module and a reasoning module. For the dialogue module, we intend to use the DenK framework in combination with Discourse Representation Theory, with an extension for pragmatic utterances with modal verbs. At Utrecht University, an agent programming language called 3APL (An Abstract Agent Programming Language) has been developed, which we use as the reasoning engine for the companion. We are currently extending 3APL with emotions, social reasoning and real-time reasoning in order to create a framework for the sociable capabilities of the companion.
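
As a rough illustration of the kind of deliberation 3APL supports, the following Python sketch mimics a belief/goal/rule cycle; it is an illustrative analogue, not 3APL syntax, and the beliefs, goals and rules are invented for the example:

    # Illustrative Python analogue of a 3APL-style deliberation cycle.
    # Real 3APL is a dedicated agent language with its own syntax and semantics;
    # the beliefs, goals and rules below are invented for the example.

    beliefs = {"user_present": True, "user_mood": "sad"}
    goals = ["cheer_up_user"]

    # Practical reasoning rules: a goal plus a belief condition selects a plan.
    rules = [
        ("cheer_up_user",
         lambda b: b.get("user_mood") == "sad",
         ["show_expression('happy')", "say('Shall we play a game?')"]),
    ]

    def deliberate(beliefs, goals, rules):
        """Return the plan of the first rule whose goal is adopted and whose guard holds."""
        for goal, guard, plan in rules:
            if goal in goals and guard(beliefs):
                return plan
        return []

    for action in deliberate(beliefs, goals, rules):
        print("execute:", action)   # in a real system these would drive the iCat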

Psychologically grounded Animation and Emotion-Based Control Architecture for iCat

Amandine Grizard, Christine Lisetti, and Marco Paleari, EURECOM, France
We are currently studying a cognitive-affective computational architecture for Affective Socially Intelligent Agents (ASIA). Our design for the ASIA architecture is based on an existing psychological theory of emotions, and we propose to instantiate the ASIA architecture and test it on the iCat platform.
In this presentation, we will discuss how the iCat's facial expressions of emotions can be simulated in terms of what is known about the dynamics of facial expressions in humans, in relation to the emotion-based ASIA architecture. We will discuss some of the implications of our results, as well as propose a wish list that our research group would like to suggest for the iCat platform.
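
As one possible illustration of such expression dynamics (the onset/apex/offset durations and the idea of a single scalar intensity driving the face actuators are assumptions for this sketch, not part of the ASIA architecture itself):

    # Sketch of an onset-apex-offset intensity envelope for one facial expression.
    # The durations and the single scalar intensity are assumed for illustration.

    def expression_intensity(t, onset=0.3, apex=1.0, offset=0.6):
        """Intensity in [0, 1] at time t (seconds) after the expression starts."""
        if t < onset:                        # ramp up towards the apex
            return t / onset
        if t < onset + apex:                 # hold at full intensity
            return 1.0
        if t < onset + apex + offset:        # relax back to neutral
            return 1.0 - (t - onset - apex) / offset
        return 0.0

    # Sample the envelope every 100 ms, as a control loop might.
    print([round(expression_intensity(t / 10), 2) for t in range(22)])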

i-Cat as a companion robot

Siska Fitrianie, Dragos Datcu, Alin Chitu, Leon Rothkrantz, Delft University of Technology, The Netherlands
A project is running at the Man-Machine Interaction Group, Delft University of Technology, with the goal of designing the i-Cat as a companion robot. On the input side there is a strong focus on the recognition of emotions from speech, facial expressions and text. The i-Cat is supposed to fuse and process the multimodal input and to represent it in a modality-independent way. The i-Cat should be able to extract features from the environment and the interaction so as to be aware of the context, so the data processing should be context sensitive. Next, the Dialogue Management module activates the most probable scripts for interpreting the input data. Finally, appropriate actions are generated and displayed by the fission module. The output is supposed to be multimodal.
Currently the focus of the research is on the input part of the system. The recognition of emotions from facial expressions is based on the Viola-Jones algorithm, combined with Support Vector Machines as the classifier. In a similar way we extract emotion from speech. The text processing is based on natural language processing: we use a list of 41 emotional expressions and are able to place them in a two-dimensional space using distance measures extracted from WordNet and multidimensional scaling techniques. Possible applications include a supervising robot, an entertainment robot, a tutor robot, or a help-desk assistant.
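
A minimal sketch of the text pipeline described here, assuming WordNet path similarity as the distance measure (one of several possible WordNet measures) and scikit-learn's MDS implementation; the word list is a small illustrative subset, not the 41 expressions used in the study:

    # Sketch of the text pipeline: WordNet-based distances between emotion words,
    # then multidimensional scaling into a 2-D space.
    import numpy as np
    from nltk.corpus import wordnet as wn        # requires the WordNet corpus data
    from sklearn.manifold import MDS

    words = ["joy", "anger", "fear", "sadness", "surprise", "disgust"]
    synsets = [wn.synsets(w, pos=wn.NOUN)[0] for w in words]

    # Turn path similarity (in (0, 1]) into a dissimilarity matrix for MDS.
    n = len(words)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                sim = synsets[i].path_similarity(synsets[j]) or 0.05
                dist[i, j] = 1.0 - sim

    coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
    for w, (x, y) in zip(words, coords):
        print(f"{w:>9s}: ({x:+.2f}, {y:+.2f})")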

Discussion

The day ended with a lively discussion about the addressed topics.