Planning Trajectories under Object Pose Uncertainty
Planning in robotics means coping with dynamic and uncertain worlds. The agent perceives unstructured environments through noisy sensors such as lasers and cameras. Uncertainty plays a particularly crucial role when the agent works "in contact" with the environment: in such cases the agent interacts actively with its surroundings, and its actions affect the future states of the environment itself. In other words, future observations depend on the agent's earlier actions. For this reason, we model the problem as a closed-loop (embedded) system in which the agent plans its policy with respect to the environment's current output (the agent's input), while the outcome of its actions (the agent's output) affects the future evolution of the system.
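The closed-loop structure described above can be sketched in a few lines. The following is a minimal, purely illustrative example (not from the paper): a one-dimensional environment in which the agent's action shifts the hidden state, so each noisy observation the agent receives depends on its earlier actions. All names (`environment_step`, `policy`, the target and gain values) are hypothetical.

```python
import random

random.seed(0)  # fixed seed so the illustrative run is repeatable

def environment_step(state, action, noise_std=0.05):
    """The action changes the state, so future observations
    depend on earlier actions (closed-loop interaction)."""
    next_state = state + action                              # action moves the state
    observation = next_state + random.gauss(0.0, noise_std)  # noisy sensor reading
    return next_state, observation

def policy(observation, target=1.0, gain=0.5):
    """Plan the next action from the current observation (agent's input)."""
    return gain * (target - observation)

state = 0.0
observation = state
for _ in range(20):
    action = policy(observation)                          # agent's output...
    state, observation = environment_step(state, action)  # ...shapes future inputs
```

Because the loop feeds the agent's own action back through the environment before the next observation, the plan cannot be computed open-loop: each action must account for how it changes what will be sensed next.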
In robot grasping, there is typically uncertainty about the location of the object to be grasped. If the object is not in its expected location, however, a robot equipped with tactile sensors, or with torque sensors at the finger joints, can exploit contacts (or the absence of expected contacts) during the execution of a reach-to-grasp trajectory to refine its localisation knowledge.
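One simple way to realise this refinement, sketched here under assumptions not stated in the text, is a Bayesian update of a discrete belief over the object's position: a tactile probe either makes contact or not, and Bayes' rule reweights each candidate pose by how well it explains that observation. The sensor model (`contact_likelihood`), the 0.95/0.05 reliability values, and the centimetre grid are all illustrative choices.

```python
def contact_likelihood(pose_cm, probe_cm, radius_cm=1):
    """P(contact | pose): high when the probe lies on the object,
    low otherwise (a simple, assumed tactile sensor model)."""
    return 0.95 if abs(pose_cm - probe_cm) <= radius_cm else 0.05

def update_belief(belief, probe_cm, contact):
    """Bayes update of the pose belief given one tactile observation."""
    posterior = {}
    for pose, prior in belief.items():
        like = contact_likelihood(pose, probe_cm)
        posterior[pose] = prior * (like if contact else 1.0 - like)
    z = sum(posterior.values())  # normalise so the belief sums to one
    return {p: w / z for p, w in posterior.items()}

# Uniform prior over candidate object positions (in cm along one axis).
poses = list(range(0, 11))
belief = {p: 1.0 / len(poses) for p in poses}

# A probe at 3 cm feels no contact: probability mass shifts away
# from poses near the probe, refining the localisation estimate.
belief = update_belief(belief, probe_cm=3, contact=False)
```

Note that the *absence* of an expected contact is just as informative as a contact: in the run above, poses near the probe lose probability mass even though the sensor touched nothing.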