Sensors & Actuators in Cyber-physical Environments

In a cyber-physical environment (CPE), the physical interaction of users with artifacts in the environment is considered a vital part of the behavior that needs to be understood as a whole, in order to overcome the boundaries between human-machine interaction and interaction within the physical world. To observe these interactions, the CPE utilizes either the sensors contained within these artifacts (if they are part of a CPS) or sensors available in the surroundings. This differs from the previous paradigm, in which the user interacted explicitly with input devices such as touch screens, microphones, or gesture recognition devices. In MADMACS, sensors (which may be the same hardware, complemented by additional devices) capture much more than the explicit input in order to achieve a better situation understanding and awareness.
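To illustrate the distinction between explicit input acts and implicit sensor observations, the following minimal Python sketch (hypothetical class and attribute names, not the actual MADMACS data model) shows how both kinds of events could feed into one shared situation picture:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime
from typing import Any


@dataclass
class InputAct:
    """Explicit user input, e.g. a touch gesture or a spoken command."""
    modality: str          # "speech", "touch", "gesture", ...
    content: str           # recognized utterance or command
    timestamp: datetime = field(default_factory=datetime.now)


@dataclass
class SensorObservation:
    """Implicit observation from a sensor in an artifact or the surroundings."""
    source: str            # e.g. "workbench.load_cell" or "room.presence_sensor"
    value: Any             # raw or preprocessed sensor reading
    timestamp: datetime = field(default_factory=datetime.now)


class SituationModel:
    """Aggregates explicit input and implicit observations for situation awareness."""

    def __init__(self) -> None:
        self.events: list[InputAct | SensorObservation] = []

    def update(self, event: InputAct | SensorObservation) -> None:
        # Both kinds of events contribute to the overall situation picture.
        self.events.append(event)
```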

Similarly, output in the context of human-machine dialogue has so far been focused mostly on devices that present information, e.g., screens or speakers. To account for the rich set of devices in a CPE and to support all common types of interaction, the actions that can be performed within the CPE using actuators must be considered an integral part of the output rendering. This means that users should be able to refer to and operate, for example, a workbench, an elevator, the lights, or the air conditioning through the CPE dialogue.
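As a sketch of this idea (with hypothetical names, not the concrete MADMACS API), an output rendering step could dispatch both presentation acts and actuator actions resulting from a single dialogue turn:

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class PresentationAct:
    """Conventional output, e.g. text on a screen or synthesized speech."""
    device: str   # e.g. "workshop_display"
    content: str


@dataclass
class ActuatorAction:
    """Physical action carried out by an actuator in the CPE."""
    device: str   # e.g. "workbench", "elevator", "ceiling_light"
    command: str  # e.g. "raise", "call_floor_2", "switch_on"


def render_output(acts: list[PresentationAct | ActuatorAction]) -> None:
    """Treat actuator actions as first-class output alongside presentation."""
    for act in acts:
        if isinstance(act, ActuatorAction):
            print(f"[actuate] {act.device}: {act.command}")
        else:
            print(f"[present] {act.device}: {act.content}")


# Example: the user asks to raise the workbench and confirm when done.
render_output([
    ActuatorAction(device="workbench", command="raise"),
    PresentationAct(device="workshop_speaker", content="The workbench is raised."),
])
```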

Our task is to extend the existing dialogue I/O model with sensor devices (devices that produce sensor data but do not necessarily report final input acts) and actuator devices (devices that perform actions but do not necessarily present information). Specialized models for these devices allow their data to be interpreted semantically.
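One possible way to express this extension, sketched here in Python with hypothetical interface names (the concrete device model may differ), is to add sensor and actuator abstractions alongside the conventional input and output devices:

```python
from abc import ABC, abstractmethod
from typing import Any


class InputDevice(ABC):
    """Classic input device: reports finished, interpretable input acts."""

    @abstractmethod
    def read_input_act(self) -> dict: ...


class OutputDevice(ABC):
    """Classic output device: presents information to the user."""

    @abstractmethod
    def present(self, content: str) -> None: ...


class SensorDevice(ABC):
    """Produces sensor data, but not necessarily final input acts; a
    specialized model is needed to interpret the data semantically."""

    @abstractmethod
    def read_sample(self) -> Any: ...


class ActuatorDevice(ABC):
    """Performs actions in the environment, but does not necessarily
    present information to the user."""

    @abstractmethod
    def perform(self, command: str) -> None: ...
```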