CN112364853B - Robot task execution method based on knowledge base and PDDL semantic design - Google Patents
Robot task execution method based on knowledge base and PDDL semantic design
- Publication number
- CN112364853B CN112364853B CN202110044134.3A CN202110044134A CN112364853B CN 112364853 B CN112364853 B CN 112364853B CN 202110044134 A CN202110044134 A CN 202110044134A CN 112364853 B CN112364853 B CN 112364853B
- Authority
- CN
- China
- Prior art keywords
- knowledge
- task
- robot
- action
- knowledge base
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/042—Backward inferencing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Geometry (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a robot task execution method based on a knowledge base and PDDL semantic design, belonging to the field of knowledge bases and autonomous robot decision making. The method comprises the following steps: a camera globally senses the objects in the scene; visual analysis of the images obtains the object states; a graph database is used to express the knowledge base; a PDDL domain file and problem file are generated; and the plan is executed and returned to the robots for cooperative execution. The method enables multiple robots to cooperatively execute tasks in an indoor scene whose semantic environment is known, and is of great significance for autonomous decision-making and execution by multiple robots.
Description
Technical Field
The invention belongs to the field of knowledge bases and autonomous robot decision making, and particularly relates to a robot task execution method based on a knowledge base and PDDL semantic design.
Background
The knowledge base is a leading-edge technology in the computer field. On the market it is mostly applied to financial anti-fraud, knowledge question answering, search and the like, and is still rarely applied in the robot field; applying it to knowledge-based task planning is currently at the forefront of that field. A knowledge base stores the common usages of objects in daily life, the relationships among objects, the methods for operating objects and the like, simulating the way the human brain memorizes common knowledge, actions and task execution. Meanwhile, knowledge reasoning technology is used to reason about and recognize unknown, unfamiliar environments from existing knowledge, thereby achieving autonomous learning and autonomous task planning and pointing toward the future intelligence of robots.
In the warehousing field, task execution currently consists of arranging the task sequence in advance and then scheduling the robots to execute it, with no handling when an exception occurs.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a robot task execution method based on a knowledge base and PDDL semantic design. The method uses the knowledge stored in the knowledge base together with autonomous task decomposition and planning in the PDDL language, requires no manual arrangement or configuration of tasks, and can automatically handle dynamic conditions and exceptions while the robots execute.
In order to achieve the above purpose, the invention adopts the following technical scheme: a robot task execution method based on a knowledge base and PDDL semantic design, comprising the following steps:
Step one: in an indoor scene, sense the environmental objects through a global camera and complete object recognition using a semantic segmentation method;
Step two: by setting three-dimensional global coordinates, identify the position coordinates of the objects from step one, acquire the position coordinates of each object's centre of gravity, and calculate the positional relations between objects; the object position coordinates and the positional relations between objects constitute the object state; generate the current-environment instantiation knowledge and store it in Redis;
Step three: use a graph database to express the knowledge base, which comprises robot task knowledge and robot knowledge; the robot task knowledge consists of task target states and action knowledge, where the action knowledge expresses the state change of an object after an action is executed using a Petri-net-like structure and records the object state changes at the input and output of the action; the robot knowledge records the registered robots, including the robot components, the actions they can execute, and their associations with action knowledge;
Step four: after receiving a task, the knowledge base matches the task to the corresponding action knowledge, analyzes each obtained action, and converts it into the action structure required by the PDDL format to generate a domain file; the domain file includes assertions and actions; the corresponding task stored in the robot task knowledge is fuzzy-matched from the knowledge base using NLP technology, and the task target state in the robot task knowledge is combined with the current-environment instantiation knowledge to generate a problem file;
Step five: decompose and plan the tasks to be executed by searching in the global state space, obtain the global execution sequence of the task by backward reasoning from the task target state in the generated problem file and the domain file, and, during task execution, let the knowledge base schedule the corresponding robots to execute according to the global execution sequence and the action knowledge; meanwhile, the robots sense the state of environmental objects in real time during task execution and store the sensed environmental object information, including object positions, object states and robot states, in Redis;
Step six: judge in real time whether the object state is the same as the assertions in the domain file; if so, continue executing the task; otherwise, repeat steps four and five until task execution is finished.
Further, if no task is matched from the knowledge base in step four, the corresponding task knowledge and task target state are added to the robot task knowledge.
Compared with the prior art, the invention has the following beneficial effects: the robot task execution method based on a knowledge base and PDDL semantic design can realize autonomous task planning and execution by multiple robots in a warehouse environment through the knowledge base and the PDDL planning language, and has good scientific-research and engineering value. The invention adopts the knowledge base, a storage medium that simulates human memorization of knowledge, to memorize different scenes and is therefore applicable to different scenes. In addition, Redis is used to store the instantiated knowledge of a scene, greatly reducing the latency of information retrieval and processing. When the scene changes dynamically or task execution becomes abnormal, the robots can perceive this in real time and replan in real time, which improves the intelligence of the method.
Drawings
FIG. 1 is a flowchart of a robot task execution method based on knowledge base and PDDL semantic design.
FIG. 2 is a diagram of the action knowledge structure.
Detailed Description
The object of the invention is explained in further detail below with reference to the drawings.
Fig. 1 shows a flowchart of the robot task execution method based on a knowledge base and PDDL semantic design; the method specifically includes the following steps:
Step one: in an indoor scene, sense the environmental objects through a global camera and complete object recognition using a semantic segmentation method;
Step two: by setting three-dimensional global coordinates, identify the position coordinates of the objects from step one, acquire the position coordinates of each object's centre of gravity, and calculate the positional relations between objects; the object position coordinates and the positional relations between objects constitute the object state; generate the current-environment instantiation knowledge from the object states and store it in Redis;
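As a concrete illustration of storing the instantiated object state, the following minimal Python sketch uses the redis-py client; the key naming scheme, the JSON layout and the helper function are illustrative assumptions rather than the patent's actual schema.

```python
# Minimal sketch of step two, assuming the redis-py client; key names and the
# JSON layout are illustrative assumptions, not the patent's actual schema.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def store_object_state(name, centroid, relations):
    """Store one perceived object's instantiated state in Redis.

    centroid  -- (x, y, z) position of the object's centre of gravity
                 in the global coordinate frame
    relations -- list of (predicate, other_object) tuples such as
                 ("on", "shelfA") or ("near", "tableB")
    """
    state = {
        "position": {"x": centroid[0], "y": centroid[1], "z": centroid[2]},
        "relations": [{"predicate": p, "object": o} for p, o in relations],
    }
    # One key per object; the whole key set forms the current-environment
    # instantiation knowledge read later when generating the problem file.
    r.set(f"env:object:{name}", json.dumps(state))

# Example: a box whose centre of gravity was measured at (1.2, 0.4, 0.9)
store_object_state("box1", (1.2, 0.4, 0.9), [("on", "shelfA")])
```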
Step three: a graph database is used to express the knowledge base, which comprises robot task knowledge and robot knowledge. The robot task knowledge consists of task target states and action knowledge; the action knowledge expresses the state change of an object after an action is executed using a Petri-net-like structure and records the object state changes at the input and output of the action. The object function knowledge records the function knowledge of objects and the usage knowledge of objects. The robot knowledge records the registered robots, including the robot components, the actions they can execute, and their associations with action knowledge. The object knowledge stored in the knowledge base is at an abstract level, so the current-environment instantiation knowledge must be perceived visually by the camera; at the same time, the states and positional relations of objects in the scene change in real time during task execution, so the current-environment instantiation knowledge must be perceived and acquired in real time, and the instantiated object states and positional relations are then stored in memory to assist the subsequent generation of the problem file and real-time planning. Because the invention is a knowledge base oriented to task planning, the knowledge is designed around task execution: the pieces of knowledge are mutually independent, while relational links can be established between them. The robot knowledge and the task knowledge, the object function knowledge and the spatial knowledge, and the object knowledge and the task knowledge (the action knowledge within a task) are linked to each other; at the same time, the knowledge classification is clear and non-overlapping, which facilitates future knowledge acquisition and querying.
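One way to realize the graph-database expression of the knowledge base is sketched below, using Neo4j and its official Python driver (5.x API) as an example back end; the node labels, relationship types, property names and credentials are assumptions chosen for illustration.

```python
# Sketch of expressing the knowledge base in a graph database, assuming Neo4j
# and its official Python driver; labels and relationship types are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def register_transport_task(tx):
    # Task knowledge = task target state + linked action knowledge;
    # robot knowledge = registered robots linked to the actions they can execute.
    tx.run(
        """
        MERGE (t:Task {name: 'transport'})
        MERGE (g:TargetState {predicate: 'on', subject: '?object', object: '?target'})
        MERGE (t)-[:HAS_TARGET]->(g)
        MERGE (a:Action {name: 'move-to', precondition: 'canMove', effect: 'near'})
        MERGE (t)-[:USES_ACTION]->(a)
        MERGE (r:Robot {name: 'r2', component: 'mobile base'})
        MERGE (r)-[:CAN_EXECUTE]->(a)
        """
    )

with driver.session() as session:
    session.execute_write(register_transport_task)
driver.close()
```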
Step four: after receiving a task, the knowledge base matches the task to the corresponding action knowledge, analyzes each obtained action, converts it into the action structure required by the PDDL format, and generates a domain file. The domain file includes assertions and actions. An assertion appears both as an independent part of the domain file and as part of an action definition: the assertions are a group of state-judgment functions located at the beginning of the domain file and defined after the predicates, for example (near ?ob ?sub), which judges whether object ob is near object sub and returns true or false; here the ? symbol denotes an example class, i.e. the objects ob and sub are not fixed, can be arbitrary, and are replaced by real objects according to the environment. An action is composed of parameters, conditional assertions and effect assertions: the parameters are the objects involved in completing the action, the conditional assertions are the prerequisite object states required to complete the action, and the effect assertions are the judgments of the object states after the action is completed. For example, for the action (move-to r1 A), the parameters are r1 and A, the conditional assertion (canMove r1) indicates that r1 has the ability to move, and the effect assertion (near r1 A) indicates that r1 is near A. The corresponding task stored in the robot task knowledge is fuzzy-matched from the knowledge base using NLP technology, and the task target state in the robot task knowledge is combined with the current-environment instantiation knowledge to generate a problem file. For example, for the task "move 3 boxes on shelf A to shelf B", verbs such as move and put correspond in the knowledge base to the target-state predicate (on ?subject ?object), and shelf B represents the target position of the boxes, so the task target is understood as (on BlocksA tableB). Finally the problem file is generated. If the task is not matched from the knowledge base, the corresponding task knowledge and task target state are added to the robot task knowledge, and the task target state in the robot task knowledge is combined with the current-environment instantiation knowledge to generate the problem file. The existing task planning approach is basically to configure the domain file and the problem file manually; the invention dynamically generates these 2 files by using the knowledge base to store task target states and action knowledge, which makes task planning more intelligent.
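As an illustration of how the problem file can be assembled, the following Python sketch combines a matched task target state with the instantiated environment facts read back from Redis. The fuzzy NLP matching step is omitted; the Redis key layout, the helper names and the "warehouse" domain name are assumptions carried over from the sketch in step two, not the patent's actual implementation.

```python
# Sketch of step four: turning the Redis-stored instantiation knowledge plus the
# matched task target state into a PDDL problem file written to disk.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def collect_init_facts():
    """Turn the instantiated object states stored in Redis into PDDL init facts."""
    facts = []
    for key in r.scan_iter("env:object:*"):
        name = key.decode().rsplit(":", 1)[-1]
        state = json.loads(r.get(key))
        for rel in state.get("relations", []):
            facts.append(f"({rel['predicate']} {name} {rel['object']})")
    return facts

def write_problem_file(path, objects, goal):
    init = "\n         ".join(collect_init_facts())
    text = (f"(define (problem task) (:domain warehouse)\n"
            f"  (:objects {' '.join(objects)})\n"
            f"  (:init {init})\n"
            f"  (:goal {goal}))\n")
    with open(path, "w") as f:
        f.write(text)

# Target state matched for the task "move 3 boxes on shelf A to shelf B"
write_problem_file("problem.pddl", ["BlocksA", "shelfA", "tableB"],
                   "(on BlocksA tableB)")
```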
Step five: the tasks to be executed are decomposed and planned by searching in the global state space; the global execution sequence of the task is obtained by backward reasoning from the task target state in the generated problem file and the domain file; during task execution, the knowledge base schedules the corresponding robots to execute according to the global execution sequence and the action knowledge. Meanwhile, the robots sense the state of environmental objects in real time during task execution and store the sensed environmental object information, including object positions, object states and robot states, in Redis. Searching the global state space allows the task execution sequence to be planned more quickly and accurately, and introducing the Redis state pool enables fast querying of the environment state and concurrent execution of multiple tasks, increasing the concurrency of the scheme.
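The backward reasoning over the global state space can be illustrated with a toy regression search, sketched below under strong simplifications (a few grounded actions, flat fact sets, no delete effects); a real PDDL planner would be used in practice, so the action set and fact spelling here are assumptions.

```python
# Toy sketch of step five: backward (regression) search from the task target
# state over a handful of grounded actions.
from collections import namedtuple

Action = namedtuple("Action", "name pre eff")

ACTIONS = [
    Action("move-to r2 A", {"canMove r2"}, {"near r2 A"}),
    Action("move-to r2 B", {"canMove r2"}, {"near r2 B"}),
    Action("pick-up r2 O", {"canGrab r2", "near r2 A"}, {"hold r2 O"}),
    Action("put-on r2 O B", {"hold r2 O", "near r2 B"}, {"on O B"}),
]

def regress(goal, init, plan=()):
    """Return an action sequence (goal-to-init order) achieving `goal` from `init`."""
    if goal <= init:
        return list(plan)
    for a in ACTIONS:
        if a.name in plan or not (a.eff & goal):
            continue
        # Regression step: drop what the action achieves, add its preconditions.
        new_goal = (goal - a.eff) | a.pre
        result = regress(new_goal, init, plan + (a.name,))
        if result is not None:
            return result
    return None

init = {"canMove r2", "canGrab r2"}
plan = regress({"on O B"}, init)
print(list(reversed(plan)))   # forward execution order:
# ['move-to r2 A', 'pick-up r2 O', 'move-to r2 B', 'put-on r2 O B']
```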
Step six: whether the object state is the same as the action's assertions is judged in real time; if so, the task continues to execute; otherwise, steps four and five are repeated until task execution is finished. By judging the result of each execution step in real time and making real-time task planning decisions, exceptions or errors during task execution are resolved automatically, improving the adaptability and intelligence of the task.
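The real-time check of step six can be sketched as a monitoring loop that compares the perceived state in Redis with each action's expected effect assertion and triggers replanning on a mismatch; the Redis key layout follows the earlier sketches, and the dispatch and replan hooks are hypothetical placeholders rather than the patent's actual interfaces.

```python
# Sketch of step six, assuming the Redis layout from the earlier sketches.
import json
import time
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def dispatch_to_robot(action):
    # Placeholder: the knowledge base would schedule the robot associated
    # with this action's knowledge.
    print("executing", action)

def effect_holds(obj, predicate, other):
    """Check whether the fact (predicate obj other) is in the perceived state."""
    raw = r.get(f"env:object:{obj}")
    if raw is None:
        return False
    return any(rel["predicate"] == predicate and rel["object"] == other
               for rel in json.loads(raw).get("relations", []))

def execute_plan(plan, replan):
    """plan: list of (action, (obj, predicate, other)) expected-effect pairs."""
    for action, expected in plan:
        dispatch_to_robot(action)
        time.sleep(0.5)                    # wait for perception to refresh
        if not effect_holds(*expected):
            # Perceived state differs from the effect assertion:
            # regenerate the domain/problem files (steps four and five) and replan.
            return execute_plan(replan(), replan)
    return True
```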
Examples
Take multiple robots transporting 1 object O from location A to location B as an example. As shown in Fig. 2, there are 2 types of robots: a grasping robot and a conveying robot. The grasping robot r1 has a mechanical arm and can grasp objects but cannot move; the conveying robot r2 can convey objects but has no mechanical arm. After receiving the task, the knowledge base generates the domain file according to the actions that the 2 types of robots can execute. The assertions present in the domain file include (canGrab ?ob), indicating that ob has grasping capability; (on ?sub ?ob), indicating that object sub is on top of ob; (empty ?ob), indicating that ob is empty; (canCarry ?ob), indicating that ob has carrying capability; and (hold ?ob ?sub), indicating that object ob holds object sub. The actions are: (move-to), with parameters ?r and ?loc, conditional assertion (canMove ?r), and effect assertion (near ?r ?loc); (pick-up), with parameters ?r and ?ob, conditional assertions (canGrab ?r), (near ?r ?ob) and (empty ?r), and effect assertion (hold ?r ?ob); and (put-on), with parameters ?r, ?ob and ?target, conditional assertions (hold ?r ?ob) and (near ?r ?target), and effect assertions (on ?ob ?target) and (empty ?r). A problem file is generated from the task target state and the initial object states perceived by vision. The task execution sequence planned by the planning algorithm from the domain file and the problem file is as follows (one possible PDDL encoding of this example is sketched after the sequence):
(move-to r1 A) robot r1 moves to A;
(move-to r2 A) robot r2 moves to A;
(pick-up r2 O) robot r2 grasps object O;
(put-on r2 O r1) robot r2 puts object O on r1;
(move-to r2 B) robot r2 moves to B;
(move-to r1 B) robot r1 moves to B;
(pick-up r2 O) robot r2 grasps object O;
(put-on r2 O B) robot r2 places object O at position B.
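The following Python sketch writes out one possible PDDL encoding of this example. The predicate and action names mirror the description above, but their exact spelling, the untyped parameters and the initial capability facts (which follow the planned sequence) are illustrative assumptions, not the patent's actual files.

```python
# A possible PDDL encoding of the Fig. 2 example, written out as text for a
# planner; typing and delete effects are omitted to keep the sketch short.
EXAMPLE_DOMAIN = """
(define (domain transport-example)
  (:predicates (canMove ?r) (canGrab ?r) (canCarry ?r) (empty ?r)
               (near ?r ?loc) (hold ?r ?ob) (on ?ob ?target))
  (:action move-to
    :parameters (?r ?loc)
    :precondition (canMove ?r)
    :effect (near ?r ?loc))
  (:action pick-up
    :parameters (?r ?ob)
    :precondition (and (canGrab ?r) (near ?r ?ob) (empty ?r))
    :effect (hold ?r ?ob))
  (:action put-on
    :parameters (?r ?ob ?target)
    :precondition (and (hold ?r ?ob) (near ?r ?target))
    :effect (and (on ?ob ?target) (empty ?r))))
"""

EXAMPLE_PROBLEM = """
(define (problem move-O-from-A-to-B)
  (:domain transport-example)
  (:objects r1 r2 O A B)
  (:init (canMove r1) (canMove r2) (canGrab r2) (canCarry r1)
         (empty r2) (near O A))
  (:goal (on O B)))
"""

with open("example-domain.pddl", "w") as f:
    f.write(EXAMPLE_DOMAIN)
with open("example-problem.pddl", "w") as f:
    f.write(EXAMPLE_PROBLEM)
```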
Compared with existing schemes, the method of the invention is an exploratory application of knowledge base technology in the field of autonomous robot task planning and execution, and it does not require the task execution sequence to be arranged in advance, which reduces the workload. On the other hand, the scheme can adapt to dynamic changes of the environment and handle exceptions during task execution, thereby realizing more intelligent, multi-scenario application.
Claims (2)
1. A robot task execution method based on knowledge base and PDDL semantic design is characterized by comprising the following steps:
step one: in an indoor scene, sensing environmental objects through a global camera and completing object recognition using a semantic segmentation method;
step two: by setting three-dimensional global coordinates, identifying the position coordinates of the objects from step one, acquiring the position coordinates of each object's centre of gravity, and calculating the positional relations between objects, the object position coordinates and the positional relations between objects constituting the object state; generating current-environment instantiation knowledge and storing it in Redis;
step three: using a graph database to express the knowledge base, which comprises robot task knowledge and robot knowledge, wherein the robot task knowledge consists of task target states and action knowledge, the action knowledge expresses the state change of an object after an action is executed using a Petri-net structure and records the object state changes at the input and output of the action, and the robot knowledge records registered robots, including the robot components, the actions they can execute and their associations with action knowledge;
step four: after receiving a task, the knowledge base matching the task to the corresponding action knowledge, analyzing each obtained action, and converting it into the action structure required by the PDDL format to generate a domain file, the domain file including assertions and actions; fuzzy-matching the corresponding task stored in the robot task knowledge from the knowledge base using NLP technology, and combining the task target state in the robot task knowledge with the current-environment instantiation knowledge to generate a problem file;
step five: decomposing and planning the tasks to be executed by searching in the global state space, obtaining the global execution sequence of the task by backward reasoning from the task target state in the generated problem file and the domain file, and, during task execution, the knowledge base scheduling the corresponding robots to execute according to the global execution sequence and the action knowledge; meanwhile, the robots sensing the state of environmental objects in real time during task execution and storing the sensed environmental object information, including object positions, object states and robot states, in Redis;
step six: judging in real time whether the object state is the same as the assertions in the domain file, continuing to execute the task when they are the same, and otherwise repeating steps four to five until task execution is finished.
2. The robot task execution method based on a knowledge base and PDDL semantic design according to claim 1, wherein in step four, if no task is matched from the knowledge base, the corresponding task knowledge and task target state are added to the robot task knowledge.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110044134.3A CN112364853B (en) | 2021-01-13 | 2021-01-13 | Robot task execution method based on knowledge base and PDDL semantic design |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110044134.3A CN112364853B (en) | 2021-01-13 | 2021-01-13 | Robot task execution method based on knowledge base and PDDL semantic design |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112364853A CN112364853A (en) | 2021-02-12 |
CN112364853B (en) | 2021-03-30
Family
ID=74534909
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110044134.3A Active CN112364853B (en) | 2021-01-13 | 2021-01-13 | Robot task execution method based on knowledge base and PDDL semantic design |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112364853B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113433941A (en) * | 2021-06-29 | 2021-09-24 | 之江实验室 | Multi-modal knowledge graph-based low-level robot task planning method |
CN113821648B (en) * | 2021-11-23 | 2022-04-08 | 中国科学院自动化研究所 | Robot task processing method and system based on ontology knowledge representation |
CN114580576B (en) * | 2022-05-05 | 2022-09-06 | 中国科学院自动化研究所 | Robot task planning method and device based on knowledge processing |
CN117950481A (en) * | 2022-10-17 | 2024-04-30 | 中国电信股份有限公司 | Interactive information generation method, device and system, electronic equipment and medium |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106779046A (en) * | 2016-12-12 | 2017-05-31 | 中山大学 | A kind of multiple agent high-order conviction cognition planner implementation method |
US20200282561A1 (en) * | 2019-03-08 | 2020-09-10 | Tata Consultancy Services Limited | Collaborative task execution by a robotic group using a distributed semantic knowledge base |
CN111098301A (en) * | 2019-12-20 | 2020-05-05 | 西南交通大学 | Control method of task type robot based on scene knowledge graph |
CN111737492A (en) * | 2020-06-23 | 2020-10-02 | 安徽大学 | Autonomous robot task planning method based on knowledge graph technology |
Also Published As
Publication number | Publication date |
---|---|
CN112364853A (en) | 2021-02-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112364853B (en) | Robot task execution method based on knowledge base and PDDL semantic design | |
Cui et al. | Toward next-generation learned robot manipulation | |
Popović et al. | A strategy for grasping unknown objects based on co-planarity and colour information | |
CN113826051A (en) | Generating digital twins of interactions between solid system parts | |
Bekiroglu et al. | A probabilistic framework for task-oriented grasp stability assessment | |
Balakirsky | Ontology based action planning and verification for agile manufacturing | |
Angleraud et al. | Coordinating shared tasks in human-robot collaboration by commands | |
Ye et al. | A novel active object detection network based on historical scenes and movements | |
Tatiya et al. | Haptic knowledge transfer between heterogeneous robots using kernel manifold alignment | |
Lin et al. | Reduce: Reformulation of mixed integer programs using data from unsupervised clusters for learning efficient strategies | |
Chaturvedi et al. | Supporting complex real-time decision making through machine learning | |
Hsiao et al. | Object schemas for grounding language in a responsive robot | |
Wake et al. | Object affordance as a guide for grasp-type recognition | |
Poss et al. | Perception-based intelligent material handling in industrial logistics environments | |
Chan et al. | Recent advances in fuzzy qualitative reasoning | |
Chaturvedi | Acquiring implicit knowledge in a complex domain | |
Kisron et al. | Improved Performance of Trash Detection and Human Target Detection Systems using Robot Operating System (ROS) | |
Fichtl et al. | Bootstrapping relational affordances of object pairs using transfer | |
Antanas et al. | Relational affordance learning for task-dependent robot grasping | |
Cui et al. | Research on LFD System of Humanoid Dual-Arm Robot | |
Jacomini Prioli et al. | Human-robot interaction for extraction of robotic disassembly information | |
Lü et al. | Generation Approach of Human-Robot Cooperative Assembly Strategy Based on Transfer Learning | |
Rashed et al. | Robotic Grasping Based on Deep Learning: A Survey | |
Hawes et al. | The playmate system | |
Min et al. | Affordance learning and inference based on vision-speech association in human-robot interactions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||