In robotics, mission planning is the transformation of a given plan into a set of operations that the robot can understand and execute, allowing it to carry out the plan successfully. This is accomplished by planning sensor-based operations and overall collision-free motion, as well as by precisely planning motion and velocity wherever multiple parts of the robot must cooperate (2).
Programming a robot is not an easy task, especially for tasks that require complex operations in a 3D workspace coordinated with information from sensors. Even for relatively simple tasks, such as those performed by industrial robots in production today, the cost of programming a robot can equal the price of the robot itself, so it is natural to prioritize methods that simplify robot programming (3).
One way to address this problem is to teach the robot what to do by giving it only the desired final result.
Giving the robot the command “Put the object on the table” is easier for the user than specifying the sequence of axis and joint movements required at each moment to achieve this result. To perform a specific task, the robot must have information about the geometry of the space, its own design, and its location in the work environment.
The robot must also know the kinematic and dynamic equations of its axes and joints, along with their operational limits, such as the permissible angle of movement, and any obstacles that may restrict its freedom of movement, as well as other information relevant to performing the task. In addition, the robot must be equipped with touch sensors and vision sensors that let it record 2D images of its workplace. These enable the robot to recognize models while planning the desired operation and, at the same time, to update its workplace model when changes occur (e.g., the introduction of new models or different obstacles), incorporating the newly acquired information into the current workplace model so that the robot always maintains a realistic “image” of the workplace. This process usually involves reconstructing the 3D workspace from the 2D images obtained by the vision sensors (3).
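As a minimal illustration of the kinematic knowledge described above, the sketch below models a planar two-link arm with permissible joint-angle ranges; the link lengths and limits are illustrative assumptions, not values from the text.

```python
import math

# Hypothetical planar 2-link arm: link lengths (metres) and permissible
# joint-angle ranges are assumed values for illustration only.
LINK_LENGTHS = [0.5, 0.3]
JOINT_LIMITS = [(-math.pi, math.pi),
                (-math.pi / 2, math.pi / 2)]

def within_limits(angles):
    """Return True if every joint angle respects its permissible range."""
    return all(lo <= a <= hi for a, (lo, hi) in zip(angles, JOINT_LIMITS))

def forward_kinematics(angles):
    """End-effector (x, y) position of the planar arm for given joint angles."""
    x = y = total = 0.0
    for length, angle in zip(LINK_LENGTHS, angles):
        total += angle
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y
```

A planner would call `within_limits` before committing to any joint configuration, rejecting motions that violate the permissible angles the text mentions.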
Equipping the robot with touch and vision sensors enables it to:
Process and analyze workplace images and create a (mostly 3D) engineering model of the workplace.
Understand the task assigned by the user in natural language (e.g., English).
Perform the task by generating motion primitives: a finite sequence of axis and joint movements over time that achieves it.
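The notion of a finite, ordered sequence of joint movements can be sketched as a small primitive library; the task name, primitive names, and target angles below are all hypothetical, chosen only to show the structure.

```python
# Hypothetical library of motion primitives: each primitive expands into
# (joint, target_angle_rad) steps. Names and values are illustrative.
PRIMITIVE_LIBRARY = {
    "reach":   [("joint1", 0.6), ("joint2", -0.4)],
    "grasp":   [("gripper", 0.0)],   # close gripper
    "lift":    [("joint2", 0.3)],
    "release": [("gripper", 1.0)],   # open gripper
}

def plan_primitives(task):
    """Expand a symbolic task into an ordered list of (joint, angle) steps."""
    steps = {"put_object_on_table": ["reach", "grasp", "lift", "release"]}
    return [move for name in steps[task] for move in PRIMITIVE_LIBRARY[name]]
```

The flat list returned by `plan_primitives` is exactly the kind of finite movement sequence the third item above describes: a symbolic command is reduced to timed joint motions.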
Robots capable of performing the above functions are called third-generation robots (3).
How can a robot make decisions about moving axes if only the desired result is given?
Robot planning problems can be solved by TAMP (Task and Motion Planning), which plans the necessary tasks (what object should I hold?) together with the integrated motion planning needed to accomplish them (how can I hold this object?).
The field of TAMP arose from combining artificial-intelligence methods for task planning with robotic methods for motion planning. TAMP methods focus on solving multi-step, continuous automated-planning problems in random and unstructured environments (4).
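The interplay between the two levels can be sketched as a toy loop: a symbolic task planner proposes candidate action sequences, and a motion-feasibility check accepts a sequence or forces an alternative. All actions, objects, and the feasibility rule here are illustrative assumptions, not a real TAMP implementation.

```python
# Hypothetical TAMP loop: symbolic plans are enumerated, and each action
# must pass a motion-feasibility check before the plan is accepted.

def task_plans(goal):
    """Enumerate candidate symbolic plans for the goal (toy, hand-coded)."""
    if goal == "object_on_table":
        yield ["grasp(cup, top)", "place(cup, table)"]
        yield ["grasp(cup, side)", "place(cup, table)"]

def motion_feasible(action):
    """Stand-in for a motion planner: here, grasps from the top are blocked."""
    return "top" not in action

def tamp_solve(goal):
    """Return the first symbolic plan whose every action admits a motion."""
    for plan in task_plans(goal):
        if all(motion_feasible(a) for a in plan):
            return plan
    return None
```

The example shows the key TAMP idea: a geometrically infeasible action (the top grasp) sends the search back to the task level, which proposes the side grasp instead.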
Many TAMP approaches combine so-called “logical search” over tasks with sample-based motion planning.
Other methods rely on estimating regions of movement or on constraint-satisfaction techniques. Another modern approach to TAMP is Logic Geometric Programming (LGP), an optimization-based approach in which logic imposes a structure of active constraints on a nonlinear program; LGP has been shown to integrate physical reasoning with TAMP (4).
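To make the “sample-based motion planning” mentioned above concrete, the sketch below grows a minimal RRT (rapidly-exploring random tree) from a start point toward a goal in a 2D workspace with one circular obstacle. The workspace, obstacle, and parameters are illustrative assumptions.

```python
import math
import random

# Hypothetical 2D workspace: one circular obstacle (x, y, radius) and a
# fixed extension step. Values are assumed for illustration.
OBSTACLE = (0.5, 0.5, 0.15)
STEP = 0.05

def collision_free(p):
    ox, oy, r = OBSTACLE
    return math.hypot(p[0] - ox, p[1] - oy) > r

def steer(p, q):
    """Move from p toward q by at most STEP."""
    d = math.hypot(q[0] - p[0], q[1] - p[1])
    if d <= STEP:
        return q
    return (p[0] + STEP * (q[0] - p[0]) / d,
            p[1] + STEP * (q[1] - p[1]) / d)

def rrt(start, goal, iters=5000, seed=0):
    """Grow a tree of collision-free nodes; return a start-to-goal path."""
    rng = random.Random(seed)
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        # Sample the goal 10% of the time, otherwise a random point.
        sample = goal if rng.random() < 0.1 else (rng.random(), rng.random())
        nearest = min(tree, key=lambda n: math.hypot(n[0] - sample[0],
                                                     n[1] - sample[1]))
        new = steer(nearest, sample)
        if collision_free(new):
            tree[new] = nearest
            if math.hypot(new[0] - goal[0], new[1] - goal[1]) <= STEP:
                path, node = [], new
                while node is not None:
                    path.append(node)
                    node = tree[node]
                return path[::-1]
    return None
```

Each accepted node is collision-checked before joining the tree, so the returned sequence of waypoints avoids the obstacle; this is the sampling step that TAMP methods combine with task-level search.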
1- Robotics: Manipulation Planning [Internet]. Robotics.tu-berlin.de. 2020 [cited 2 July 2021]. Available from: Here
2- Azar A, Vaidyanathan S. Handbook of Research on Advanced Intelligent Control Engineering and Automation [Internet]. US: IGI Global; 2015 [cited 2 July 2021]. 794 p. Available from: Here
3- Morecki A, Knapczyk J. Basics of Robotics – Theory and Components of Manipulators and Robots. 1st ed. Vienna: Springer; 1999. 580 p. Available from: Here
4- Driess D, Oguz O, Ha J, Toussaint M. Deep Visual Heuristics: Learning Feasibility of Mixed-Integer Programs for Manipulation Planning. 2020 IEEE International Conference on Robotics and Automation (ICRA) [Internet]. Paris: IEEE; 2020 [cited 2 July 2021]. p. 3. Available from: Here