Group Interaction

The environments in which cyber-physical systems (CPS) are deployed are typically not occupied by a single person alone. As soon as multiple users interact with the same CPS, or with different CPS that share the same physical environment, new challenges arise.

One of these challenges is to support multiparty communication: instead of the single user assumed in the traditional HCI paradigm, the system must deal simultaneously with many independent and mobile users, or groups of users with a joint intention, in the same smart environment. Understanding and augmenting multiparty interaction means that MADMACS must be able to simultaneously track the speech, gesture, and gaze microbehaviors of all group members with low latency and high accuracy in a non-intrusive way. In this project, we intend to consider true n:m relations between people and systems, i.e., n users can interact with m systems in arbitrary combinations.
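
To make the n:m relation concrete, the following minimal Python sketch models an interaction registry in which any number of users can hold concurrent bindings to any number of systems; all identifiers (InteractionRegistry, bind, users_of, etc.) are illustrative assumptions, not part of an existing MADMACS API.

    from collections import defaultdict

    class InteractionRegistry:
        """Tracks which users currently interact with which systems (n:m)."""

        def __init__(self):
            self._systems_by_user = defaultdict(set)
            self._users_by_system = defaultdict(set)

        def bind(self, user: str, system: str) -> None:
            """Record that `user` has started interacting with `system`."""
            self._systems_by_user[user].add(system)
            self._users_by_system[system].add(user)

        def unbind(self, user: str, system: str) -> None:
            """Record that the interaction has ended."""
            self._systems_by_user[user].discard(system)
            self._users_by_system[system].discard(user)

        def systems_of(self, user: str) -> set:
            return set(self._systems_by_user[user])

        def users_of(self, system: str) -> set:
            return set(self._users_by_system[system])

    # Two users share one shelf while one of them also operates a machine.
    registry = InteractionRegistry()
    registry.bind("alice", "shelf-1")
    registry.bind("bob", "shelf-1")
    registry.bind("alice", "machine-3")
    assert registry.users_of("shelf-1") == {"alice", "bob"}

Any richer session model (e.g., with modality streams or priorities attached to each binding) would build on the same bidirectional mapping.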

Since some activities might require actions from CPS that are already performing tasks for another user, the question of turn-taking among users or groups of users arises. Some systems are more likely to be shared by multiple parties than others, e.g., certain machines in a factory or intelligent shelves in a shop. Such systems should exploit all available input, including speech, gestures, and eye gaze, to facilitate turn-taking. For devices whose input might switch rapidly between users, human metaphors for assigning turns (e.g., hand gestures) should be used to anticipate which user will operate the device next.
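
As a rough illustration of such multimodal turn-taking, the sketch below fuses speech, gesture, and gaze cues into a per-user score and grants the next turn to the strongest request; the cue set, weights, and function names are assumptions chosen for illustration, not a specification of the platform.

    from dataclasses import dataclass

    @dataclass
    class Cues:
        """Multimodal evidence that a user is requesting the next turn."""
        addressed_by_speech: bool  # e.g., spoke the device's name
        pointing_at_device: bool   # hand gesture toward the device
        gaze_dwell_s: float        # seconds of continuous eye gaze

    def turn_score(c: Cues) -> float:
        """Fuse the cues into one turn-request score (weights are assumed)."""
        return (2.0 * c.addressed_by_speech
                + 1.5 * c.pointing_at_device
                + min(c.gaze_dwell_s, 3.0))  # cap the gaze contribution

    def next_turn(requests: dict[str, Cues]) -> str | None:
        """Grant the turn to the user with the strongest request, if any."""
        scored = {user: turn_score(c) for user, c in requests.items()}
        best = max(scored, key=scored.get, default=None)
        return best if best is not None and scored[best] > 0 else None

    # Bob points at the device and looks at it; Alice only glances at it.
    print(next_turn({
        "alice": Cues(False, False, 0.5),
        "bob": Cues(False, True, 2.0),
    }))  # -> "bob"

In a deployed system the cues would of course come from the tracked microbehaviors described above rather than from hand-set values.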

Moreover, the concept of user roles should be integrated into the platform. Roles can refer to different work domains or authorization levels (in the factory and retail contexts), driver and passenger (in the automotive context), adults and children (in the hotel and home contexts), etc. For instance, a home appliance might offer children a simplified interface with reduced functionality for increased safety, while exposing its full functionality to adults.
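
One simple way to realize such role-dependent interfaces is a capability table that maps each role to the functions it may access, so that the interface exposes only the intersection of a device's functions with the role's capabilities; the roles, function names, and helper below are hypothetical examples, not part of the platform.

    # Hypothetical capability table: which functions each role may access.
    ROLE_CAPABILITIES = {
        "adult":      {"set_temperature", "self_clean", "child_lock"},
        "child":      {"set_temperature"},  # reduced, safer subset
        "technician": {"set_temperature", "self_clean", "child_lock",
                       "factory_reset"},    # elevated authorization level
    }

    def visible_functions(role: str, device_functions: set[str]) -> set[str]:
        """Return only the functions the interface should expose for `role`."""
        return device_functions & ROLE_CAPABILITIES.get(role, set())

    oven = {"set_temperature", "self_clean", "child_lock", "factory_reset"}
    print(visible_functions("child", oven))  # -> {'set_temperature'}

An unknown role maps to the empty set, so a misconfigured device fails closed rather than exposing unsafe functionality.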