Inria research structures database
Sensor-based and interactive robotics
RAINBOW (SR0814ZR) → RAINBOW → VIRTUS (SR0926VR)
Status:
Decision signed
Head:
Paolo Robuffo Giordano
Mots-clés de "A - Thèmes de recherche en Sciences du numérique - 2023" :
A5.1.2. Evaluation of interactive systems
A5.1.3. Haptic interfaces
A5.1.7. Multimodal interfaces
A5.1.9. Perceptual analyses and user studies
A5.4.4. 3D and spatio-temporal reconstruction
A5.4.6. Object localization
A5.4.7. Visual servoing
A5.6. Virtual reality, augmented reality
A5.6.1. Virtual reality
A5.6.2. Augmented reality
A5.6.3. Avatar simulation and embodiment
A5.6.4. Multisensory feedback and interfaces
A5.9.2. Estimation, modeling
A5.10.2. Perception
A5.10.3. Planning
A5.10.4. Action
A5.10.5. Interactions (with the environment, humans, other robots)
A5.10.6. Robot fleets
A5.10.7. Learning
A6.4.1. Deterministic control
A6.4.3. Observability and controllability
A6.4.4. Stability and stabilization
A6.4.5. Control of system parameters
A6.4.6. Optimal control
A9.5. Robotics
A9.7. AI algorithmics
A9.9. Distributed AI, multi-agent systems
Mots-clés de "B - Autres sciences et domaines d'application - 2023" :
B2.4.3. Surgery
B2.5. Disability and personal assistance
B2.5.1. Sensorimotor disabilities
B2.5.2. Cognitive disabilities
B2.5.3. Assistance for elderly people
B5.1. Factory of the future
B5.6. Robotic systems
B8.1.2. Sensor networks
B8.4. Safety and rescue of people
Domain:
Perception, Cognition, Interaction
Theme:
Robotics and Smart Environments
Period:
01/06/2018 -> 31/12/2026
Evaluation dates:
12/01/2022
Affiliated institution(s):
CNRS, INSA RENNES, U. RENNES
Partner laboratory(ies):
IRISA (UMR6074)
CRI:
Centre Inria de l'Université de Rennes
Location:
Centre Inria de l'Université de Rennes
Inria structure code:
031126-1
RNSR number:
201822637G
Inria structure number:
SR0842HR
The long-term vision of the Rainbow team is to develop the next generation of sensor-based robots able to navigate and/or interact in complex unstructured environments together with human users. Clearly, the word "together" can take very different meanings depending on the particular context: for instance, it can refer to mere coexistence (robots and humans share a space while performing independent tasks), to human-awareness (robots must know the human's state and intentions in order to properly adjust their actions), or to actual cooperation (robots and humans perform a common task and must coordinate their actions). In this general context, the activities of Rainbow will be particularly focused on the case of (shared) cooperation between robots and humans by pursuing the following vision: on the one hand, endow robots with a large degree of autonomy so that they can operate effectively in non-trivial environments (e.g., outside completely defined factory settings); on the other hand, include human users in the loop so that they can (partially and bilaterally) control some aspects of the overall robot behaviour. We plan to address these challenges from the methodological, algorithmic and application points of view.
The Rainbow team will be structured around the following research axes: a central axis, the shared control of complex robotic systems, supported by three axes (optimal and uncertainty-aware sensing; advanced sensor-based control; haptics for robotics applications) that aim to develop the methods, algorithms and technologies required to realize it.
Hereafter is a summary description of the four research axes of Rainbow.
Optimal and Uncertainty-Aware Sensing
Future robots will need a large degree of autonomy for, e.g., interpreting the sensory data for an accurate estimation of the robot and world state (which may also include the human users), and for devising motion plans able to take into account many constraints (actuation, sensor limitations, environment), including the state estimation accuracy (i.e., how well the robot/environment state can be reconstructed from the sensed data). In this context, we will be particularly interested in (i) devising trajectory optimization strategies able to maximize some norm of the information gain gathered along the trajectory (with the available sensors). This can be seen as an instance of Active Sensing, with the main focus on online/reactive trajectory optimization strategies able to take into account several requirements/constraints (sensing/actuation limitations, noise characteristics). We will also be interested in the coupling between optimal sensing and the concurrent execution of additional tasks (e.g., navigation, manipulation). (ii) Formal methods for guaranteeing the accuracy of localization/state estimation in mobile robotics, mainly exploiting tools from interval analysis. The interest of these methods lies in their ability to provide possibly conservative, but guaranteed, bounds on the best accuracy one can obtain with a given robot/sensor pair; they can thus be used for planning purposes or for system design (choice of the best sensor suite for a given robot/task). (iii) Localization/tracking of objects with poor/unknown or deformable shape, which will be of paramount importance for allowing robots to estimate the state of "complex objects" (e.g., human tissues in medical robotics, elastic materials in manipulation) in order to control the robot's pose/interaction with the objects of interest.
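As an illustrative formalization (a standard one from the active-sensing literature, not necessarily the team's specific formulation), the "information gain gathered along the trajectory" can be quantified through the observability/constructibility Gramian of the linearized system,
\[
\mathcal{G}(t_0,t_f) \;=\; \int_{t_0}^{t_f} \Phi^{\top}(\tau,t_0)\, C^{\top}(\tau)\, W^{-1}\, C(\tau)\, \Phi(\tau,t_0)\, d\tau ,
\]
where \(\Phi\) is the state-transition matrix of the linearized dynamics, \(C = \partial h / \partial x\) the measurement Jacobian and \(W\) the measurement-noise covariance. An active-sensing trajectory optimization then selects the inputs \(u(\cdot)\) so as to maximize a scalar measure of \(\mathcal{G}\), e.g.,
\[
\max_{u(\cdot)} \;\lambda_{\min}\big(\mathcal{G}(t_0,t_f)\big) \quad \text{subject to actuation and sensing constraints.}
\]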
Advanced Sensor-based Control
One of the main areas of expertise of the previous Lagadic team has been, generally speaking, sensor-based control, i.e., how to exploit (typically onboard) sensors for controlling the motion of fixed/ground robots. The main emphasis has been on devising ways to directly couple the robot motion with the sensor outputs in order to invert this mapping and drive the robots towards a configuration specified as a desired sensor reading (thus, directly in sensor space). This general idea has been applied to very different contexts: mainly standard vision (hence the Visual Servoing keyword), but also audio, ultrasound imaging, and RGB-D.
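For reference, the classical visual-servoing formulation (standard in the literature; recalled here only as an illustration of this "inversion") reads as follows. With \(e(t) = s(t) - s^{*}\) the error between the current and desired sensor features, the feature dynamics
\[
\dot{e} \;=\; L_{e}\, v_{c}
\]
relate \(e\) to the camera velocity \(v_{c}\) through the interaction matrix \(L_{e}\), and the basic control law
\[
v_{c} \;=\; -\lambda\, \widehat{L_{e}}^{+}\, e , \qquad \lambda > 0 ,
\]
drives the error (ideally exponentially) to zero, where \(\widehat{L_{e}}^{+}\) denotes the pseudo-inverse of an estimate of the interaction matrix.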
The use of sensors for controlling the robot motion will clearly remain a central topic of the Rainbow team, since (especially onboard) sensing is a main characteristic of any future robotics application (which should typically operate in unstructured environments and thus mainly rely on its own ability to sense the world). We naturally aim at making the best out of our experience in sensor-based control to propose new, advanced ways of exploiting sensed data for, roughly speaking, controlling the motion of a robot. In this respect, we plan to work on the following topics: (i) "direct/dense methods", which try to directly exploit the raw sensory data when computing the control law for positioning/navigation tasks. The advantage of these methods is that they require little data pre-processing, which minimizes feature-extraction errors and, in general, improves the overall robustness/accuracy (since all the available data is used by the motion controller); (ii) sensor-based interaction with objects of unknown/deformable shape, for gaining the ability to manipulate, e.g., flexible objects from the acquired sensed data (e.g., controlling online a needle being inserted in a flexible tissue); (iii) sensor-based model predictive control, by developing online/reactive trajectory optimization methods able to plan feasible trajectories for robots subject to sensing/actuation constraints, with the possibility of exploiting (onboard) sensing for continuously replanning (over some future time horizon) the optimal trajectory. These methods will play an important role when dealing with complex robots affected by complex sensing/actuation constraints, for which purely reactive strategies are not effective. Furthermore, the coupling with the aforementioned optimal sensing will also be considered; (iv) multi-robot decentralized estimation and control, with the aim of devising, again, sensor-based strategies for groups of multiple robots that need to maintain a formation or perform navigation/manipulation tasks. Here, the challenges come from the need to devise "simple", decentralized and scalable control strategies in the presence of complex sensing constraints (e.g., limited field of view and occlusions when using onboard cameras). The need to locally estimate global quantities (e.g., a common frame of reference, global properties of the formation such as connectivity or rigidity) will also be an active line of research.
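To make the receding-horizon idea of point (iii) concrete, the following minimal Python sketch (a toy example under simplified assumptions: a unicycle model, a single known landmark observed as a bearing/distance "sensor reading", illustrative gains and bounds; it is not one of the team's actual controllers) re-optimizes a short control sequence at every step and applies only its first element:

# Minimal sensor-based MPC sketch (illustrative assumptions throughout).
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 10                      # time step and prediction horizon
LANDMARK = np.array([3.0, 2.0])      # known landmark position (assumed)
V_MAX, W_MAX = 1.0, 1.5              # actuation limits (assumed)
S_DES = np.array([0.0, 1.0])         # desired (bearing, distance) sensor reading

def step(x, u):
    """Discrete unicycle model: x = (px, py, theta), u = (v, omega)."""
    px, py, th = x
    v, w = u
    return np.array([px + DT * v * np.cos(th),
                     py + DT * v * np.sin(th),
                     th + DT * w])

def measure(x):
    """Bearing and distance to the landmark, i.e., the simulated sensor output."""
    d = LANDMARK - x[:2]
    bearing = np.arctan2(d[1], d[0]) - x[2]
    return np.array([np.arctan2(np.sin(bearing), np.cos(bearing)), np.linalg.norm(d)])

def cost(u_flat, x0):
    """Sum of squared sensor-space errors plus a small control-effort term."""
    u_seq = u_flat.reshape(H, 2)
    x, c = x0, 0.0
    for u in u_seq:
        x = step(x, u)
        c += np.sum((measure(x) - S_DES) ** 2) + 1e-2 * np.sum(u ** 2)
    return c

def mpc_control(x0, u_warm):
    """Optimize the control sequence over the horizon; return its first element."""
    bounds = [(-V_MAX, V_MAX), (-W_MAX, W_MAX)] * H
    res = minimize(cost, u_warm.ravel(), args=(x0,), method="L-BFGS-B", bounds=bounds)
    u_seq = res.x.reshape(H, 2)
    return u_seq[0], u_seq

# Closed loop: re-plan at every step from the newly "sensed" state.
x = np.array([0.0, 0.0, 0.0])
u_warm = np.zeros((H, 2))
for _ in range(50):
    u0, u_warm = mpc_control(x, u_warm)
    x = step(x, u0)
print("final sensor reading:", measure(x), "desired:", S_DES)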
Haptics for Robotics Applications
In the envisaged shared cooperation between human users and robots, the typical sensory channel (besides vision) exploited to inform the human users is most often the force/kinesthetic one (in general, the sense of touch and of forces applied to the human hand or limbs). Therefore, a part of our activities will be devoted to studying and advancing the use of haptic cueing algorithms and interfaces for providing feedback to the users during the execution of some shared task. We will consider: (i) multi-modal haptic cueing for general teleoperation applications, by studying how to convey information through the kinesthetic and cutaneous channels. Indeed, most haptic-enabled applications typically only involve kinesthetic cues, e.g., the forces/torques that can be felt by grasping a force-feedback joystick/device. These cues are very informative about, e.g., preferred/forbidden motion directions, but they are also inherently limited in their resolution since the kinesthetic channel can easily become overloaded (when too much information is compressed in a single cue). In recent years, the rise of novel cutaneous devices able to, e.g., provide vibro-tactile feedback on the fingertips or skin has proven to be a viable solution for complementing the classical kinesthetic channel. We will then study how to combine these two sensory modalities for different prototypical application scenarios, e.g., 6-dof teleoperation of manipulator arms, virtual fixtures approaches, and remote manipulation of (possibly deformable) objects; (ii) in the particular context of medical robotics, we plan to address the problem of providing haptic cues for typical medical robotics tasks, such as semi-autonomous needle insertion and robotic surgery, by exploring the use of kinesthetic feedback for rendering the mechanical properties of the tissues, and vibrotactile feedback for providing guiding information about pre-planned paths (with the aim of increasing the usability/acceptability of this technology in the medical domain); (iii) finally, in the context of multi-robot control, we would like to explore how to use the haptic channel for providing information about the status of multiple robots executing a navigation or manipulation task. In this case, the problem is (even more) how to map (or compress) information about many robots into a few haptic cues. We plan to use specialized devices, such as actuated exoskeleton gloves able to provide cues to each fingertip of a human hand, or to resort to "compression" methods inspired by hand postural synergies for providing coordinated cues representative of a few (but complex) motions of the multi-robot group, e.g., coordinated motions (translations/expansions/rotations) or collective grasping/transporting.
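As a toy illustration of point (iii) above (the "compression" of a multi-robot state into a few haptic cues), the following Python sketch extracts the dominant collective motion components of a robot group via an SVD and maps them to per-actuator force cues; the gains, the number of actuators and the mapping itself are deliberately naive assumptions, not the synergy-based methods the team will actually develop:

# Naive "synergy-style" compression of multi-robot motion into haptic cues.
import numpy as np

def synergy_cues(robot_velocities, n_actuators=3, gain=2.0, f_max=5.0):
    """robot_velocities: (N, 2) planar velocities of the N robots.
    Returns n_actuators scalar force cues (e.g., one per fingertip)."""
    V = np.asarray(robot_velocities, dtype=float)
    V_centered = V - V.mean(axis=0)               # remove the common translation
    # Principal directions of the residual per-robot motion ("synergies").
    _, s, _ = np.linalg.svd(V_centered, full_matrices=False)
    mean_speed = np.linalg.norm(V.mean(axis=0))   # magnitude of the collective translation
    raw = np.concatenate(([mean_speed], s))[:n_actuators]
    return np.clip(gain * raw, -f_max, f_max)     # saturate to device limits

# Example: 4 robots expanding outward -> small common translation, large "expansion" synergy.
vels = np.array([[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]])
print(synergy_cues(vels))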
Shared Control of Complex Robotics Systems
This final and main research axis will exploit the methods, algorithms and technologies developed in the previous axes for realizing applications involving complex semi-autonomous robots operating in complex environments together with human users. The leitmotiv is to realize advanced shared-control paradigms, which essentially aim at blending robot autonomy and the user's intervention in an optimal way, so as to exploit the best of both worlds (robot accuracy/sensing/mobility/strength and the human's cognitive capabilities). A common theme will be the issue of where to "draw the line" between robot autonomy and human intervention: obviously, there is no general answer, and any design choice will depend on the particular task at hand and/or on the technological/algorithmic possibilities of the robotic system under consideration.
A prototypical envisaged application, exploiting and combining the previous three research axes, is as follows: a complex robot (e.g., a two-arm system, a humanoid robot, a multi-UAV group) needs to operate in an environment by exploiting its onboard sensors (in general, vision as the main exteroceptive one) and to deal with many constraints (limited actuation, limited sensing, complex kinematics/dynamics, obstacle avoidance, interaction with difficult-to-model entities such as surrounding people, and so on). The robot must then possess a rather large autonomy for interpreting and exploiting the sensed data in order to estimate its own state and that of the environment ("Optimal and Uncertainty-Aware Sensing" axis), and for planning its motion in order to fulfil the task (e.g., navigation, manipulation) while coping with all the robot/environment constraints. Therefore, advanced control methods able to exploit the sensory data to the fullest, and able to cope online with constraints in an optimal way (by, e.g., continuously replanning and predicting over a future time horizon), will be needed ("Advanced Sensor-based Control" axis), with a possible (and interesting) coupling with the sensing part for optimizing, at the same time, the state estimation process. Finally, a human operator will typically be in charge of providing high-level commands (e.g., where to go, what to look at, what to grasp and where) that will then be autonomously executed by the robot, with possible local modifications due to the various (local) constraints. At the same time, the operator will also receive online visual-force cues informative of, in general, how well her/his commands are being executed and whether the robot would prefer or suggest other plans (because of local constraints that are not of the operator's concern). This information will have to be visually and haptically rendered with an optimal combination of cues that will depend on the particular application ("Haptics for Robotics Applications" axis).
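A minimal sketch of the kind of blending described above is given below (purely illustrative: the weighting, saturation and feedback law are assumptions, not the team's method). The operator's command and the autonomous correction are mixed, actuation limits are enforced, and the mismatch between the commanded and executed motion is returned as a kinesthetic cue:

# Minimal shared-control blending sketch (illustrative assumptions throughout).
import numpy as np

def shared_control(u_human, u_auto, alpha=0.6, u_max=1.0, k_force=3.0):
    """u_human, u_auto: desired velocity commands (np.ndarray).
    alpha: authority given to the human (1 = full teleoperation, 0 = full autonomy).
    Returns the executed command and the haptic feedback force."""
    u_cmd = alpha * u_human + (1.0 - alpha) * u_auto   # blend the two inputs
    norm = np.linalg.norm(u_cmd)
    if norm > u_max:                                   # enforce actuation limits
        u_cmd = u_cmd * (u_max / norm)
    f_haptic = -k_force * (u_human - u_cmd)            # cue: "the robot deviated this much"
    return u_cmd, f_haptic

# Example: operator pushes toward an obstacle, autonomy pushes away.
u_cmd, f = shared_control(np.array([1.0, 0.0]), np.array([0.0, 0.8]))
print("executed command:", u_cmd, "haptic cue:", f)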