CN115179301A - Behavior tree-based ground target object automatic grabbing control method and system - Google Patents


Info

Publication number
CN115179301A
CN115179301A (application CN202211106744.2A)
Authority
CN
China
Prior art keywords
node
target
control
target object
mechanical arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211106744.2A
Other languages
Chinese (zh)
Other versions
CN115179301B (en)
Inventor
黄开宏
肖军浩
汪雪
于清华
曾志文
卢惠民
郑志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202211106744.2A
Publication of CN115179301A
Application granted
Publication of CN115179301B
Legal status: Active
Anticipated expiration: (not listed)


Classifications

    • B25J9/1602 Programme controls characterised by the control system, structure, architecture (programme-controlled manipulators)
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators, characterised by motion, path, trajectory planning
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F16/29 Geographical information databases
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G06T1/0014 Image feed-back for automatic industrial control, e.g. robot with camera
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06F2209/548 Indexing scheme relating to interprogram communication: queue
    • G06T2207/10012 Image acquisition modality: stereo images
    • G06T2207/20081 Special algorithmic details: training; learning
    • G06T2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/30252 Subject of image: vehicle exterior; vicinity of vehicle
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a behavior tree-based automatic grabbing control method and system for ground target objects. The method comprises the following steps: S01, constructing a driving planning system, a target detection system and a mechanical arm control system, which are respectively connected to a master controller; S02, registering a driving planning node, a target detection node and a mechanical arm control node in the master controller, and constructing a behavior tree according to the execution sequence of the nodes; S03, after control is started, the master controller directs each system to execute its nodes in sequence according to the behavior tree: driving to the target position is controlled by executing the driving planning node, target object detection is controlled by executing the target detection node, and the action of the mechanical arm is controlled by executing the mechanical arm control node. The invention can realize fully automatic searching and grabbing of ground target objects over a large range, and has the advantages of a simple implementation method, a high degree of automation, high control efficiency and strong flexibility.

Description

Behavior tree-based ground target object automatic grabbing control method and system
Technical Field
The invention relates to the technical field of robot control, in particular to a ground target object automatic grabbing control method and system based on a behavior tree.
Background
In the prior art, ground target object grasping systems usually adopt a centralized, semi-automatic remote-control mode: the robot is first placed at a designated position, the grasping operation is then performed by remote control, and the whole process is centrally controlled by a single industrial computer. This mode has the following problems:
1. The degree of automation is low, and some operations still rely on manual execution, so fully automatic operation cannot be achieved. Such systems are therefore only applicable to grabbing objects at fixed positions or within a small range, and cannot perform automatic large-scale search. In complex environments especially, it is difficult for a robot to remotely search a large area for target objects and grab them accurately, which makes large-scale application impractical.
2. The tasks to be executed for grabbing a ground target object are very complex, including driving planning, target detection, mechanical arm control and the like. With a centralized control mode, not only is the control flow complex, but the control risk is also concentrated: once the central controller fails, the affected range is wide and the consequences are serious, so reliability is low and fault tolerance is poor. The wiring of a centralized system is also complex, which further reduces reliability. When a failure occurs, the whole application must be checked and updated, so the system is not easy to extend, and as the system grows more complex, a centralized system becomes increasingly difficult to maintain.
Although a master-slave control mode offers good real-time performance, its extensibility is poor and it is difficult to maintain. In modes that use a PLC for control, although a PLC has high reliability and strong anti-interference capability, it is expensive and complex to use and operate; PLC software and hardware have poor compatibility and a closed architecture, can only perform logic control, make complex control algorithms difficult to implement, offer poor flexibility, and cannot adjust the task logic in real time according to target characteristics.
Some practitioners have proposed using traditional target detection and recognition algorithms to make the robot execute search-and-grab tasks automatically, but this approach is still centralized: system reliability is low, fault tolerance is poor, the system is not easy to extend, and large-scale fully automatic search and grabbing of ground targets cannot be achieved. It is particularly unsuitable for searching for and grabbing target objects in dangerous scenes such as tunnel exploration.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems in the prior art, the invention provides a behavior tree-based ground target object automatic grabbing control method and system which are simple in implementation method, high in automation degree and control efficiency and strong in flexibility.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a ground target object automatic grabbing control method based on a behavior tree comprises the following steps:
s01, constructing a driving planning system for executing driving planning control, a target detection system for executing target detection control and a mechanical arm control system for executing mechanical arm grabbing control, and respectively connecting the driving planning system, the target detection system and the mechanical arm control system to a master controller for centralized management;
s02, respectively registering a driving planning node, a target detection node and a mechanical arm control node in the master controller so as to be correspondingly applied to executing driving planning, target detection and mechanical arm control respectively, and constructing and forming a behavior tree according to the execution sequence of each node;
s03, after starting control, the master controller controls all systems to sequentially execute all nodes according to the behavior tree, wherein the driving planning system executes the driving planning nodes to control a controlled robot provided with a target detection system and a mechanical arm to drive to a target position; when the controlled robot is judged to be driven to the target position, the target detection node is executed through the target detection system so as to control the target object detection; and when the target detection system detects the required target object, executing the mechanical arm control node through the mechanical arm control system to control the mechanical arm to grab until the ending state of the behavior tree is executed, and finishing the automatic grabbing of the target object.
Further, the driving planning node comprises a navigation node for requesting the controlled robot to navigate to a specified position, namely the input GPS coordinate of the target position;
the target detection nodes comprise a target coarse identification node and a target fine positioning node. The target coarse identification node is used for requesting a front camera in the target detection system to take a picture and to return whether a target exists in front; if so, the center coordinates of the target object are stored in a first array {centroids_ptr}. The target fine positioning node is used for requesting a binocular camera in the target detection system to take a picture and to return whether a target object exists in the current region; if the target fine positioning node recognizes a target object, the recognized object position information and the corresponding detection points and grabbing points are stored in a second array {action_queue_ptr};
the mechanical arm control node comprises a mechanical arm folding node, a mechanical-arm-move-to-fixed-point node for requesting the mechanical arm to move to a fixed position, and a mechanical arm grabbing node for requesting the mechanical arm to grab.
Further, the behavior tree also comprises a plurality of condition nodes that assist the driving planning node, target detection node and mechanical arm control node in completing the corresponding driving planning, target detection and mechanical arm grabbing. The condition nodes comprise a database node for storing GPS target point information, a judgment node for judging whether the queue storing the target GPS information is empty, an acquire-head-GPS node for acquiring the first datum in that queue, and a remove-head-GPS node for removing the first datum from that queue.
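The queue-handling condition nodes above can be sketched minimally in Python. This is an illustrative reconstruction, not the patent's code: the blackboard class, the function names, and the SUCCESS/FAILURE status strings are assumptions standing in for the real behavior tree implementation.

```python
from collections import deque

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Blackboard:
    """Shared state the condition nodes read and mutate (hypothetical)."""
    def __init__(self, gps_targets):
        # corresponds to the queue of target GPS points held by the database node
        self.remaining_gps = deque(gps_targets)
        self.current_target = None

def gps_queue_has_targets(bb):
    """Judgment node: SUCCESS while the GPS queue still holds targets."""
    return SUCCESS if bb.remaining_gps else FAILURE

def get_head_gps(bb):
    """Acquire-head node: read (without removing) the first GPS target."""
    if not bb.remaining_gps:
        return FAILURE
    bb.current_target = bb.remaining_gps[0]
    return SUCCESS

def remove_head_gps(bb):
    """Remove-head node: drop the first GPS target once it has been visited."""
    if not bb.remaining_gps:
        return FAILURE
    bb.remaining_gps.popleft()
    return SUCCESS
```

A sequence over these three nodes yields the visit-next-target behavior described above: the tree fails out of the loop exactly when the queue is exhausted.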
Further, the behavior tree also includes control nodes, which define the way the child nodes of the current node are traversed. The control nodes include a sequential execution node, a selection node and a keep-running node. The sequential execution node defines that the child nodes under the current node are executed from left to right and that the next node is executed only after the previous node has succeeded; if any child node fails, the execution fails. The selection node defines that if the first child node returns failure, the next child node is executed, so that different strategies are tried until a feasible one is found; if the last child node also returns failure, all child nodes are suspended and failure is returned. The keep-running node defines that the current node keeps the behavior tree running until a child node returns failure.
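The three control-node semantics can be condensed into a few lines of Python. This is a generic behavior-tree sketch under the usual tick conventions, not the patent's implementation; the closures and status strings are illustrative.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

def sequence(children):
    """Run children left to right; fail on the first child that fails."""
    def tick():
        for child in children:
            if child() == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def selector(children):
    """Try children left to right; succeed on the first child that succeeds."""
    def tick():
        for child in children:
            if child() == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

def keep_running_until_failure(child):
    """Re-tick the child until it returns FAILURE (bounded here for safety)."""
    def tick(max_ticks=1000):
        for _ in range(max_ticks):
            if child() == FAILURE:
                return SUCCESS
        return FAILURE
    return tick
```

The selector is what lets the tree "try different strategies": each child is one strategy, and the first one to succeed short-circuits the rest.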
Further, in step S03, the master controller's controlling each system to execute each node in sequence according to the behavior tree includes: the master controller generates control instructions according to the execution sequence of the nodes in the behavior tree and distributes them to each system, each control instruction containing information on the action to be executed. After a system receives a control instruction sent by the master controller, it matches its own predefined action information against the action information in the instruction; if the match succeeds, the successfully matched system starts executing the corresponding node, until the end state of the behavior tree is reached.
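The broadcast-and-match step can be illustrated with a small sketch. The message format, class names and action names below are assumptions for illustration, not the patent's protocol: each subsystem pre-registers the actions it can perform and executes only instructions that match.

```python
class Subsystem:
    """A subsystem that executes only the actions it has pre-registered."""
    def __init__(self, name, registered_actions):
        self.name = name
        self.registered_actions = set(registered_actions)
        self.executed = []

    def on_instruction(self, instruction):
        """Match the instruction's action against this subsystem's actions."""
        action = instruction["action"]
        if action in self.registered_actions:
            self.executed.append(action)   # matched: this subsystem runs the node
            return True
        return False                       # no match: ignore the broadcast

def broadcast(subsystems, instruction):
    """Master controller: distribute the instruction to every subsystem."""
    return [s.name for s in subsystems if s.on_instruction(instruction)]
```

With this shape, each behavior-tree tick reduces to one broadcast, and exactly the subsystem owning the current node reacts.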
Further, the step S03 includes:
s301, a master controller acquires information of a plurality of GPS target points to be explored in a target scene map, extracts one GPS target point as a current target point, and executes a driving planning node through a driving planning system so as to control a controlled robot to move to the current target point;
s302, after the controlled robot is judged to reach the current target point, the master controller controls the target detection node to be executed through the target detection system so as to collect a front image and identify whether a target object exists in front of the target detection node, and if so, coordinate information of the identified target object is obtained;
s303, executing the driving planning node through the driving planning system by the master controller so as to control the controlled robot to move to the position of the currently identified target object;
s304, after the controlled robot is judged to reach the position of the target object, the master controller executes the mechanical arm control node through the mechanical arm control system to control the tail end of the mechanical arm to move to the position of a photographing point, then executes the target detection node through the target detection system to acquire an image and identify whether the target object exists in the current area, and if the target object exists, information of the target object and detection point and grabbing point information corresponding to the target object are acquired;
s305, the master controller executes the mechanical arm control nodes through the mechanical arm control system so as to control and execute grabbing of the target object according to the information of the detection points and the grabbing points.
Further, the step S301 includes:
acquiring the target GPS information to be explored according to the target scene map, and storing it in a third array {task_info_ptr} in the database node;
extracting the several GPS points preset in the scene from the third array {task_info_ptr} by a queue separation operation to form a GPS information queue {remaining_gps}, and extracting the first GPS target point to go to as the current target point by executing the acquire-head-GPS node;
sending, through the master controller, a control instruction for executing a navigate-to-specified-position request to each system, the control instruction carrying the position information of the current target point;
after receiving the control instruction, the driving planning system executes the navigation node to control the controlled robot, which carries the target detection system and the mechanical arm, to move to the current target point. During execution the behavior tree continuously checks the state information returned by the current node and, according to the returned state, either continues executing the node or executes the next node, until the controlled robot reaches the current target position.
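The "continuously checks the returned state" pattern is the standard RUNNING convention for long-lived behavior-tree actions. A minimal sketch (node class and names are illustrative assumptions, not the patent's code):

```python
RUNNING, SUCCESS, FAILURE = "RUNNING", "SUCCESS", "FAILURE"

class NavigateNode:
    """Long-running action node: reports RUNNING until the goal is reached."""
    def __init__(self, distance_ticks):
        self.remaining = distance_ticks   # ticks of driving left to the target

    def tick(self):
        if self.remaining > 0:
            self.remaining -= 1
            return RUNNING                # still driving toward the target
        return SUCCESS                    # goal reached

def run_until_done(node, max_ticks=1000):
    """Re-tick a node while it reports RUNNING, as the behavior tree does."""
    for _ in range(max_ticks):
        status = node.tick()
        if status != RUNNING:
            return status
    return FAILURE
```

Navigation failure (e.g. an unreachable goal) would surface as FAILURE from `tick()`, letting a parent selection node try an alternative strategy.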
Further, the step S302 includes:
when it is judged that the controlled robot has reached the target position, the master controller sends each system a request for the camera to take a picture and identify the target object;
after receiving the control instruction sent by the master controller, the target detection system executes the target coarse identification node: it controls the front camera to take a picture, inputs the picture into the neural network, detects and identifies whether a target object exists in the picture, and returns the identification result; if a target object exists, its center coordinates are stored in the first array {centroids_ptr} and returned to the master controller;
the center coordinates of the target object are converted into grid coordinates by a convert-grid-coordinate node and stored in the third array {task_info_ptr}, and the grid coordinates of the target objects are extracted from the third array {task_info_ptr} by a queue separation operation to form an information queue {remaining_grids};
in step S303, the grid coordinate of a target object is extracted from the information queue {remaining_grids} by executing an acquire-first-grid-coordinate node, and the driving planning system then executes the navigation node to control the controlled robot to go to the position of the current grid. During execution the behavior tree continuously checks the state information returned by the current node and, according to the returned state, either continues executing the node or executes the next node, until the controlled robot reaches the target grid position.
Further, in step S304, after the controlled robot reaches the position of the target object, the mechanical arm control system executes the mechanical-arm-move-to-fixed-point node to control the end of the mechanical arm to move to the photographing position;
the target detection system then executes the target fine positioning node: it controls the binocular camera to shoot, identifies the image with a pre-trained neural network, and judges whether a target object exists in the current region; if so, it calculates the number and positions of the targets and the detection point and grabbing point corresponding to each target object, stores them in the array {action_queue_ptr}, and returns the data to the master controller;
in step S305, the mechanical arm control system controls the mechanical arm to move to a detection point and detects whether the target object meets the condition. If not, the first group of data in the array {action_queue_ptr} is popped and the arm moves to the next detection point; if so, the grabbing operation is performed. During execution, if the current object is detected not to meet the preset requirement, this information is returned to the master controller.
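The detect-then-grab loop over {action_queue_ptr} can be sketched as follows. The `check` and `do_grasp` callables stand in for the real metal-detection and arm services and are assumptions for illustration.

```python
from collections import deque

def process_action_queue(action_queue, check, do_grasp, report):
    """Consume (detection_point, grab_point) pairs; grab the ones that pass."""
    results = []
    queue = deque(action_queue)
    while queue:
        det_pt, grab_pt = queue[0]
        if check(det_pt):                 # object at this detection point qualifies
            results.append(do_grasp(grab_pt))
        else:
            report(det_pt)                # tell the master controller it failed
        queue.popleft()                   # pop the head, move to the next point
    return results
```

Popping unconditionally after each detection point matches the described behavior: every queued entry is visited exactly once, whether or not it is grabbed.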
A system for implementing the behavior tree-based ground target object automatic grabbing control method, comprising:
the driving planning system is used for controlling the controlled robot carrying the target detection system and the mechanical arm to drive to a target position through the driving planning node;
the target detection system is used for detecting a target object through the target detection node when the controlled robot runs to a target position;
the mechanical arm control system is used for controlling a mechanical arm to grab when a required target object is detected through the mechanical arm control node;
the master controller is used for managing the driving planning system, the target detection system and the mechanical arm control system in a centralized manner;
each system is provided with a controller with the same Docker container environment; the ROS2 robot operating system is installed in each Docker container, and the IP addresses of the controllers are located in the same network segment.
Compared with the prior art, the invention has the following advantages. By constructing three subsystems (a driving planning system, a target detection system and a mechanical arm control system), the invention divides the target object grabbing task into driving planning, target detection and mechanical arm control, realizing distributed control; at the same time, the master controller manages each subsystem centrally on the basis of a behavior tree. This fully combines the advantages of distributed control and behavior tree control, forming a control mode of "distributed control and centralized management", so that ground target objects can be grabbed fully automatically without manual participation or remote control throughout the process. The degree of automation and the control efficiency are greatly improved, and large-range search and grabbing can be realized conveniently and accurately. On the basis of the behavior tree, the task logic can be adjusted in real time according to target characteristics, which effectively improves the flexibility and application range of the system. Moreover, under the distributed control mode a fault in one subsystem does not affect the whole, and the subsystems can diagnose, detect and protect one another, so the system has high reliability, high fault tolerance and good extensibility.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of the method for controlling automatic grabbing of a ground target object based on a behavior tree according to the embodiment.
Fig. 2 is a control principle schematic diagram of the control system constructed in the present embodiment.
Fig. 3 is a schematic diagram illustrating the effect of the present invention on a planned driving route in a specific application embodiment.
Fig. 4 is a schematic flow chart of the present invention for implementing automatic gripping control of a ground target object in a specific application embodiment.
Fig. 5 is a schematic diagram of the construction principle of the behavior tree based control system in a specific application embodiment of the invention.
Fig. 6 is a schematic diagram of the correspondence between the nodes in the master industrial personal computer and each subsystem in this embodiment.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments of the description, without thereby limiting the scope of protection of the invention.
As shown in fig. 1, the steps of the method for controlling automatic grabbing of a ground target object based on a behavior tree in this embodiment include:
s01, constructing a driving planning system for executing driving planning control, a target detection system for executing target detection control and a mechanical arm control system for executing mechanical arm grabbing control, and respectively connecting the driving planning system, the target detection system and the mechanical arm control system to a master controller for centralized management;
s02, respectively registering a driving planning node, a target detection node and a mechanical arm control node in a master controller, respectively and correspondingly executing the driving planning, the target detection and the mechanical arm control, and constructing and forming a behavior tree according to the execution sequence of each node;
s03, after starting control, the master controller controls the systems to sequentially execute all nodes according to the behavior tree, wherein the driving planning system executes driving planning nodes to control a controlled robot carrying a target detection system and a mechanical arm to drive to a target position; when the controlled robot is judged to run to the target position, a target detection node is executed through a target detection system to control target object detection; and when the target detection system detects the required target object, executing a mechanical arm control node through the mechanical arm control system to control the mechanical arm to grab until the ending state of the behavior tree is executed, and finishing the automatic grabbing of the target object.
In this embodiment, a driving planning system, a target detection system and a mechanical arm control system are constructed, dividing the target object grabbing task into driving planning, target detection and mechanical arm control and thereby realizing distributed control; at the same time, the master controller manages all subsystems centrally on the basis of a behavior tree. This fully combines the advantages of distributed control and behavior tree control into a "distributed control, centralized management" mode: fully automatic grabbing control of ground target objects is achieved without manual participation or remote control, the degree of automation and control efficiency are greatly improved, and large-range search and grabbing can be realized conveniently and accurately. The behavior tree allows the task logic to be adjusted in real time according to target characteristics, effectively improving the system's flexibility and application range; meanwhile, faults in individual subsystems do not affect the whole, and the subsystems can diagnose, detect and protect one another, so the system further has high reliability, high fault tolerance and good extensibility.
A behavior tree is a control flow built with a graphical modeling language. Because this embodiment realizes automatic grabbing control of a ground target object on the basis of a behavior tree, the development process is fast: the client and server can be designed synchronously, which effectively shortens the development cycle. When the system actually runs, a user can remotely monitor which step of the behavior tree is currently being executed, and if a problem occurs the fault point can be located in time, which facilitates on-site debugging. After development is complete, the task logic can still be adjusted in real time according to target characteristics during execution, for example by adding a new link (such as a metal detection link), so the flexibility is very high.
As shown in fig. 2, the control system of this embodiment is divided into three subsystems: a driving planning system, a target detection system and a mechanical arm control system. The subsystems can share resources such as data, programs and peripheral equipment; each subsystem is provided with its own industrial personal computer, and the overall control algorithm runs on a master industrial personal computer (the master controller). The industrial personal computer of the driving planning system is responsible for positioning and navigation tasks, that of the target detection system for target detection and identification, and that of the mechanical arm control system for controlling the mechanical arm, gripper jaw, torque sensing, metal detector and other parts. The master industrial personal computer manages the operation of each subsystem centrally, and a distributed control mode of "decentralized control, centralized management" is realized through the division of labor and cooperation of the several industrial personal computers.
Because differently configured environments on the industrial personal computers may cause various problems in inter-system communication, all industrial personal computers are configured with the same Docker container environment, the ROS2 robot operating system is installed in the Docker container, and the IP addresses of the industrial personal computers are all located in the same network segment. Further considering the communication delay of the distributed control system, the underlying communication mechanism of ROS2, DDS (Data Distribution Service), is adopted, so that the master controller and each subsystem client communicate through DDS. DDS is implemented on the UDP/IP network protocol, which reduces network delay as much as possible and meets the application requirements of distributed real-time communication. If a new subsystem needs to be added, it only needs to be placed in the same local area network as the other subsystems and configured with the same Docker environment and the required ports, and communication between the newly added subsystem and the other systems is achieved.
In this embodiment, three subsystems realize driving planning, target detection and mechanical arm sampling respectively; each subsystem is provided with an industrial personal computer, each industrial personal computer is configured with the same Docker container environment, and the IP addresses of the industrial personal computers are located in the same network segment, i.e. the distributed control system is configured with a uniform Docker environment. Because the running environments of the industrial personal computer programs are consistent, errors caused by inconsistent environments are avoided. The communication of the control system is realized through the underlying communication protocol of ROS2, DDS, a distributed real-time communication middleware protocol implemented on the UDP network protocol; UDP communication requires no handshake, so data can be distributed efficiently, flexibly and in real time, meeting the application requirements of distributed real-time communication. Meanwhile, by adopting a modular structure, the system is easy to generalize and serialize and has good expansibility, and combined with the behavior tree control mode it can address the problems of full automation, target identification, reliability, fault tolerance, expansibility, real-time performance and flexibility of the ground target object grabbing system.
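The paragraph above notes that DDS rides on UDP/IP, which requires no connection handshake. The following minimal sketch (not the DDS protocol itself, and no ROS2 involved) illustrates the handshake-free datagram exchange on a loopback socket that underlies this low-latency property; the function name and port choice are illustrative only.

```python
import socket

def udp_loopback_demo(payload: bytes) -> bytes:
    """Send one datagram over UDP with no prior connection setup and
    return what the receiving socket saw. Illustrates why UDP-based
    transports (such as the one DDS uses) avoid handshake latency."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))        # ephemeral port, like a subsystem server
    addr = receiver.getsockname()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(payload, addr)           # fire-and-forget: no connect(), no handshake
    data, _ = receiver.recvfrom(4096)

    sender.close()
    receiver.close()
    return data
```

In the real system, reliability and discovery on top of this transport are handled by the DDS middleware rather than by application code.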
In this embodiment, when the behavior tree is constructed, the execution sequence of the nodes is as follows: the driving planning node plans a driving path for the controlled robot according to the target points pre-stored in the planned path queue and the scene map, and when the controlled robot reaches a target point, the target detection node is started; the target detection node acquires a target image and judges whether a target object exists, and if so, the center coordinate of the target object is stored and the driving planning node is then started to control the controlled robot to drive to the grid containing the target object; after the controlled robot reaches the position of a target object, the end of the mechanical arm is moved to the photographing position through the mechanical arm control node, and the target detection system is then controlled to collect information: it identifies the image based on a pre-trained neural network, judges whether a target object exists in the current area, and if so calculates target object information such as the number and positions of the targets together with the detection point and grabbing point corresponding to each target object, stores this information, and returns it to the master controller; the mechanical arm is then controlled through the mechanical arm control node to move to the detection point and detect whether the target object meets the condition; if not, the arm moves to the next detection point, and if so, the grabbing operation is performed.
In this embodiment, a deep learning method is adopted when the target detection system detects and identifies a target: target object images are collected in advance to build a data set, a neural network model is then trained, and after the robot is started to explore for ground target objects and reaches a target position, the trained model is used to identify the target objects detected in real time. Preferably, global features of the image are used for inference during target recognition; compared with traditional sliding-window and proposal-box methods, the background can be judged more accurately, so the trained model still performs well under unexpected input conditions. When a sample outside the training set appears during testing, the network can judge from the sample's features whether it belongs to a target object, giving better generalization capability.
In this embodiment, the neural network model is trained on images shot by the camera, and by changing the ground-truth labels of the training samples, a target object with any visual characteristics can be searched for and grabbed based on deep learning. Compared with traditional sliding-window methods, this gives more accurate judgment of the background, high recognition speed, high precision and strong generalization capability. In this embodiment the target object is metal; if other articles need to be grabbed, the method can be applied simply by changing the training database.
As shown in fig. 3, the process of implementing the automatic grabbing control of the ground target object in step S03 of this embodiment includes:
s301, a master controller obtains information of a plurality of GPS target points required to be explored in a target scene map, extracts one GPS target point as a current target point, and executes a driving planning node through a driving planning system to control a controlled robot to go to the current target point;
S302, after judging that the controlled robot has reached the current target point, the master controller executes the target detection node through the target detection system to collect a front image and identify whether a target object exists in front of the robot, and if so, the coordinate information of the identified target object is obtained;
s303, the master controller executes the driving planning node through a driving planning system so as to control the controlled robot to move to the position of the currently identified target object;
S304, when the controlled robot reaches the position of the target object, the master controller executes a mechanical arm control node through the mechanical arm control system to move the end of the mechanical arm to the photographing point, then executes the target detection node through the target detection system to acquire an image and identify whether a target object exists in the current area, and if so obtains the target object information together with the corresponding detection point and grabbing point information;
s305, the master controller executes the mechanical arm control node through a mechanical arm control system so as to control and execute the grabbing of the target object according to the information of the detection point and the grabbing point.
In a specific application embodiment, the automatic grabbing control of the ground target object based on the behavior tree is realized as follows:
Firstly, according to the scene map, a number of GPS target points on the planned path of the robot (comprising the trolley, the target detection system and the mechanical arm) are stored in the queue {remaining_gps} of {task_info_ptr}, and the robot is controlled to advance according to the data in the queue. When the robot reaches a target point in the queue, the front camera is controlled to take a picture, and from the picture the system judges whether a target object exists in front; if so, the center coordinates of the target objects are stored in the array {centroids_ptr}, converted and stored into the queue {remaining_grids}, and the robot then goes to the grids containing target objects according to {remaining_grids}. The planned driving route obtained in this implementation is shown in fig. 4, where "start" represents the starting position and "target" represents the position of the target.
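The queue bookkeeping described above (preset GPS goals in {task_info_ptr}, centroids converted into grid coordinates and queued in {remaining_grids}) can be sketched in plain Python. This is an illustrative model only; the function names mirror the patent's node names, and the grid cell size is an assumed parameter.

```python
from collections import deque

def make_task_info(gps_goals):
    # Analogue of DatabaseReceivedPickingTaskInfo: hold the preset
    # exploration points and an initially empty grid queue.
    return {"remaining_gps": deque(gps_goals), "remaining_grids": deque()}

def get_first(queue):
    # Analogue of GetPose2DFromSharedVector / GetGridFromSharedVector:
    # peek at the head of a shared queue without consuming it.
    return queue[0] if queue else None

def remove_first(queue):
    # Analogue of RemovePose2DFromSharedVector / RemoveGridFromSharedVector.
    if queue:
        queue.popleft()

def put_points_into_grids(task_info, centroids, cell=0.5):
    # Analogue of PutPointsIntoGrids: convert each target-object center
    # coordinate into a grid cell index (cell size assumed, in meters).
    for x, y in centroids:
        task_info["remaining_grids"].append((int(x // cell), int(y // cell)))
```

The behavior tree's inner loop then repeats "peek head, navigate there, remove head" until a judge-queue-empty node reports the queue is drained.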
After the trolley moves to a grid containing a target object, the end of the mechanical arm is controlled to move to the photographing position, and a binocular camera in the target detection system is then controlled to take a picture. The industrial personal computer of the target detection system identifies the image based on a pre-trained neural network and judges whether a target object exists in the current area; if so, the number and positions of the targets and the detection point and grabbing point corresponding to each target object are calculated, and the data are stored in the array {action_queue_ptr} and returned to the master industrial personal computer.
A series of operations is then performed by the mechanical arm control system: the mechanical arm goes to the detection point, and a metal detector is used to detect whether the target object meets the condition; if not, the first group of data in {action_queue_ptr} is popped and the arm goes to the next detection point, and if so, the arm performs the grabbing operation. Further, after grabbing, the mass of the object is measured through a torque sensor; if the mass meets the requirement, the object is placed in the grabbing basket, and if not, the object is discarded.
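The detect-then-grab loop above can be sketched as follows. The sensor callables are stand-ins for the metal detector and torque sensor, and the names are illustrative, not the patent's actual interfaces; this is a sketch of the control logic only.

```python
def process_targets(action_queue, is_metal, measure_mass, min_mass):
    """Visit detection points in order; grab only metal objects whose
    measured mass meets the requirement.
    Returns (collected, discarded) lists of targets."""
    collected, discarded = [], []
    while action_queue:
        target = action_queue.pop(0)      # pop the first group of data
        if not is_metal(target):
            continue                      # not metal: move to the next detection point
        mass = measure_mass(target)       # torque-sensor mass check after grabbing
        if mass >= min_mass:
            collected.append(target)      # place into the grabbing basket
        else:
            discarded.append(target)      # mass requirement not met: discard
    return collected, discarded
```

In the real system each branch of this loop is a behavior-tree node request to the mechanical arm server rather than a direct function call.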
In this embodiment, a behavior tree is built in the master controller. The behavior tree is a hierarchical node tree that controls the robot's decision flow and consists of nodes and edges. Because each subsystem has specific task requirements, this embodiment creates corresponding nodes for the different task requirements and registers self-defined nodes based on an XML workflow. For the driving planning system, a navigation node (NavigateToPose) is registered for requesting the controlled robot to navigate to a specified position, i.e. the GPS coordinates of the target position are input. For the target detection system, a target coarse identification node (PerceptionFindTarget) and a target fine positioning node (PerceptionEstimateGripperAction) are registered according to the task requirements: the target coarse identification node requests the front camera in the target detection system to take a picture and returns whether a target exists in front, and if so the center coordinate of the target object is stored in a first array {centroids_ptr}; the PerceptionEstimateGripperAction node requests the binocular camera in the target detection system to take a picture and returns whether a target object exists in the current area, and if so the identified object position information and the corresponding detection point and grabbing point are stored in a second array {action_queue_ptr}. For the task requirements of the mechanical arm control system, a mechanical arm folding node (ArmFold) for requesting the arm to fold, a mechanical arm move-to-fixed-point node (ArmMoveToFixedPoint) for requesting the arm to move to a fixed-point position, and a mechanical arm grabbing node (ArmProcessTarget) for requesting the arm to perform the grabbing operation are registered.
The action nodes above are registered according to the task requirements, but because the behavior tree must be executed according to the task flow, registering action nodes alone cannot complete the control flow: as the robot interacts with the environment, switching between nodes is required, and combining only action nodes with the behavior tree's internal nodes cannot meet the task requirements. Therefore, this embodiment further registers condition nodes according to the task requirements, for example a database node (DatabaseReceivedPickingTaskInfo) for storing GPS target point information, a node for judging whether the first queue is empty (IsSharedPose2DVectorEmpty), a node for acquiring the first GPS point of the queue (GetPose2DFromSharedVector), and a node for removing the first GPS point of the queue (RemovePose2DFromSharedVector). That is, the nodes in the behavior tree include execution nodes and control nodes: execution nodes comprise action nodes and condition nodes, while control nodes are internal nodes of the behavior tree used to define how the child nodes are traversed, such as Sequence and Fallback.
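The two control-node types named above have standard semantics that can be modeled in a few lines. This is a toy sketch of those semantics, not the BehaviorTree.CPP implementation the system uses; children are modeled as callables returning a status string.

```python
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

def sequence(children, state):
    # Sequence: tick children left to right; succeed only if every
    # child succeeds. A FAILURE (or RUNNING) child halts the sweep.
    for child in children:
        status = child(state)
        if status != SUCCESS:
            return status
    return SUCCESS

def fallback(children, state):
    # Fallback: tick children left to right until one does NOT fail,
    # i.e. try alternative policies until a feasible one is found.
    for child in children:
        status = child(state)
        if status != FAILURE:
            return status
    return FAILURE
```

Execution nodes (action and condition nodes) sit at the leaves and return one of the three statuses; the control nodes above only route the tick.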
After the action nodes and condition nodes are successfully registered, the corresponding XML files can be obtained. The XML files identifying the action and condition nodes are then edited visually using Groot, the registered action and condition nodes are obtained from the identification result, and the corresponding nodes are assembled with the behavior tree's internal nodes according to the control flow (shown in figs. 3 and 4) and the task requirements to construct the behavior tree; the control system constructed on this behavior tree is shown in fig. 5. In this embodiment, the tasks of the subsystems are encapsulated through the behavior tree and registered as different task nodes, so when the master controller requests a subsystem to execute a task it does not need to maintain complex callback functions, and the whole control flow is realized with a graphical modeling language.
As shown in fig. 5, the nodes in the behavior tree include:
(1) driving planning system
Navigation node (NavigateToPose): requests the vehicle to navigate to a specified location; the service name is "navigator_to_position", and the input target represents the GPS coordinates of the target location.
(2) Object detection system
Target coarse identification node (PerceptionFindTarget): requests the front camera to take a picture and returns whether a target exists in front; if so, the center coordinates of the target objects are stored in the array {centroids_ptr}.
Target fine positioning node (PerceptionEstimateGripperAction): requests the binocular camera to take a picture and returns whether a target object exists in the current area; if so, the number and positions of the objects and the detection point and grabbing point corresponding to each object are stored in the array {action_queue_ptr}.
(3) Mechanical arm control system
Arm folding node (ArmFold): requests the mechanical arm to fold; the service name is "arm/move_to_fixed_point". If the parameter {arm_is_busy} is true (True), the mechanical arm is executing another task, so the node waits for {arm_is_busy} to become false (False) before requesting the service.
Arm move-to-fixed-point node (ArmMoveToFixedPoint): requests the mechanical arm to move to a fixed point; the service name is "arm/move_to_fixed_point". ArmFold inherits from ArmMoveToFixedPoint, so their service names are consistent: when type=0 the mechanical arm is requested to fold, and when type=1 it is requested to move to the photographing point.
Mechanical arm grabbing node (ArmProcessTarget): requests the mechanical arm to perform the grabbing operation and returns process_result when the arm finishes. If process_result is not equal to 4 (the message type agreed with the mechanical arm server), the grab is judged to have failed and the mechanical arm performs the grabbing operation again.
(4) Other nodes
Sequence node (Sequence): the child nodes under this node are executed in order from left to right, and the next child is executed only after the previous one succeeds. If any child node fails, the whole Sequence fails.
Select node (Fallback): if the first child returns FAILURE, the next child is executed, trying a different policy until a feasible one is found; if the last child also returns FAILURE, the Fallback returns FAILURE.
Keep-running node (KeepRunningUntilFailure): this control node keeps the behavior tree running its child until the child returns FAILURE.
Database node (DatabaseReceivedPickingTaskInfo): mainly used to store navigation information in the array {task_info_ptr}, which comprises {remaining_gps} and {remaining_grids}; the former is a series of positioning target points preset according to the exploration scene, and the latter holds the grid coordinates converted from the center coordinates of the target objects returned by the front camera. This node is also placed in the behavior tree's main loop in order to update the target data in the array {task_info_ptr}.
First unpack-queue node (UnpackGPSGoals): extracts the queue {remaining_gps} from the array {task_info_ptr}, i.e. the series of GPS points preset for the scene.
First judge-queue-empty node (IsSharedPose2DVectorEmpty): judges whether the queue {remaining_gps} is empty, and if so, ends the current cycle.
Acquire-first-GPS node (GetPose2DFromSharedVector): obtains the first GPS point in the queue {remaining_gps} and stores it in {gps_target}.
Remove-first-GPS node (RemovePose2DFromSharedVector): removes the first GPS point in the queue {remaining_gps}.
Convert-to-grid-coordinates node (PutPointsIntoGrids): converts the center coordinates of the target objects in {centroids_ptr} into grid coordinates and stores them in {task_info_ptr}.
Second unpack-queue node (UnpackGrids): extracts the queue {remaining_grids} from the array {task_info_ptr}, i.e. the grid coordinates of the series of target objects detected by the front camera.
Second judge-queue-empty node (IsSharedGridVectorEmpty): judges whether the queue {remaining_grids} is empty, and if so, ends the current cycle.
Acquire-first-grid-coordinate node (GetGridFromSharedVector): obtains the first grid coordinate in the queue {remaining_grids} and stores it in {grid_target}.
Remove-first-grid-coordinate node (RemoveGridFromSharedVector): removes the first grid coordinate in the queue {remaining_grids}.
Get-first-target-information node (GetGripperActionFromSharedQueue): extracts the detection point and grabbing point of the first target object from {action_queue_ptr} and outputs them to {target}.
Remove-first-target-information node (RemoveGripperActionFromSharedQueue): removes the information of the first object in {action_queue_ptr}; the following objects move forward in turn, so the second object becomes the first.
Blackboard check node (BlackboardCheckInt): checks whether the parameter {process_result} equals 4. If it is not 4, the node returns SUCCESS; if it equals 4, the child node ArmProcessTarget of this node is executed. This node is used to give the mechanical arm two grabbing opportunities.
Referring to fig. 5, the behavior tree executes the executable behaviors starting from the root node, traversing from top to bottom and from left to right until a terminal state is reached. The leaf nodes of the behavior tree are all executable behaviors: a leaf node performs a specific operation, which may be a simple check (e.g. BlackboardCheckInt in fig. 5) or a more complex operation (e.g. ArmProcessTarget in fig. 5), and returns status information (SUCCESS, FAILURE, RUNNING). The branches are formed by various utility nodes that control the traversal of the tree and determine the next node to execute from the status returned by their children according to specific rules: if SUCCESS is returned, the next node is executed; if RUNNING is returned, the current node continues to execute; if FAILURE is returned, the behavior tree is terminated. The behavior tree further comprises nodes that control tree traversal, such as IsSharedGridVectorEmpty, which judges whether the queue {remaining_grids} is empty; if so, the current cycle ends and the outer loop proceeds, i.e. the robot goes to the next GPS target point.
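The inner-loop pattern just described, where a KeepRunningUntilFailure-style node repeatedly ticks a child until the child fails (here, until the grid queue is drained), can be sketched as follows. This is a toy model under assumed names, not the actual BehaviorTree.CPP nodes; the tick cap only guards against infinite loops in the sketch.

```python
SUCCESS, FAILURE, RUNNING = "SUCCESS", "FAILURE", "RUNNING"

def keep_running_until_failure(child, state, max_ticks=100):
    # Tick the child until it returns FAILURE, then report SUCCESS
    # (the loop finished); RUNNING here only signals the tick cap.
    for _ in range(max_ticks):
        if child(state) == FAILURE:
            return SUCCESS
    return RUNNING

def drain_grid_queue(state):
    # Child analogous to the inner loop body: fail once the grid
    # queue is empty (IsSharedGridVectorEmpty), otherwise visit the
    # head grid and remove it.
    if not state["remaining_grids"]:
        return FAILURE
    state["visited"].append(state["remaining_grids"].pop(0))
    return SUCCESS
```

When the inner loop reports completion, control returns to the outer loop, which advances to the next GPS target point.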
Step S03 of this embodiment, in which the master controller controls each system to execute the task nodes in sequence according to the behavior tree, specifically includes: after receiving a control instruction sent by the master controller, each system matches the action information it is predefined to execute against the action information to be executed in the control instruction; if the match succeeds, the successfully matched system starts executing the corresponding task node, until the end state of the behavior tree is reached.
In a specific application embodiment, the overall control flow is programmed on the master controller according to the behavior tree. Each subsystem and the master controller are in the same network segment and exchange information through the communication mechanisms of ROS2 (Topic, Service, Action); a subsystem only needs to check whether a related service request has been received and, if so, execute the corresponding task. The correspondence between the subsystems and the nodes is shown in fig. 6: each subsystem server (the Arm Server, the Perception Server and the navigation server) checks the received service requests and executes the corresponding operation once the service names match.
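The "check the request name, execute on match" pattern each subsystem server follows can be modeled in a few lines. This sketch uses illustrative service names from the text; the real exchange goes through ROS2 topics, services and actions rather than direct calls.

```python
class SubsystemServer:
    """Toy model of a subsystem server: a table from service/action
    name to handler. Requests whose names do not match any predefined
    entry are simply ignored by this subsystem."""

    def __init__(self, handlers):
        self.handlers = handlers            # service name -> callable

    def on_request(self, name, payload=None):
        handler = self.handlers.get(name)
        if handler is None:
            return None                     # not this subsystem's service
        return handler(payload)

# A sketch of the arm server's table, with two of the names from the text.
arm_server = SubsystemServer({
    "arm/move_to_fixed_point": lambda p: f"moving (type={p})",
    "arm/process_target": lambda p: "grabbing",
})
```

In this scheme the master controller never needs per-subsystem callbacks: it emits a named request, and whichever server's table contains that name reacts.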
Based on the behavior tree, step S301 of this embodiment includes:
S311, acquiring the target GPS information to be explored according to the target scene map and storing it in the third array {task_info_ptr} in the database node DatabaseReceivedPickingTaskInfo;
S312, extracting the preset GPS points of the scene from the third array {task_info_ptr} with the unpack-queue operation to form the GPS information queue {remaining_gps}, and extracting the first GPS target point to go to as the current target point by executing the acquire-first-GPS node GetPose2DFromSharedVector;
S313, sending, through the master controller, a control instruction requesting navigation to the specified position to each system, the control instruction carrying the position information of the current target point;
S314, after receiving the control instruction, the driving planning system executes the navigation node NavigateToPose to control the controlled robot, which carries the target detection system and the mechanical arm, to move to the current target point; during execution, the behavior tree continuously checks the status information returned by the current node and, according to the returned status, either continues executing the node or executes the next node, until the controlled robot reaches the current target position.
Based on the behavior tree, step S302 in this embodiment specifically includes:
s321, when the controlled robot is judged to reach the target position, the master controller sends a request for taking a picture by the front camera and identifying the target object to each system;
S322, after receiving the control instruction sent by the master controller, the target detection system executes the target coarse identification node PerceptionFindTarget to control the front camera to take a picture and input it into the neural network, which detects and identifies whether a target object exists in the picture and returns the identification result; if a target object exists, its center coordinates are stored in the first array {centroids_ptr} and then returned to the master controller;
S323, converting the center coordinates of the target objects into grid coordinates with the convert-to-grid-coordinates node PutPointsIntoGrids and storing them in the third array {task_info_ptr}, then extracting the grid coordinates of the target objects from the third array {task_info_ptr} with the unpack-queue operation to form the information queue {remaining_grids};
In step S303, the acquire-first-grid-coordinate node GetGridFromSharedVector is executed to extract the grid coordinate of a target object from the information queue {remaining_grids}, and the driving planning system then executes the navigation node NavigateToPose to control the controlled robot to move to the position of the current grid; during execution, the behavior tree continuously checks the status information returned by the current node and, according to the returned status, either continues executing the node or executes the next node, until the controlled robot reaches the target grid position.
In step S304, after the controlled robot reaches the position of the target object, the mechanical arm control system executes the arm move-to-fixed-point node ArmMoveToFixedPoint to move the end of the mechanical arm to the photographing position;
the target detection system then executes the target fine positioning node PerceptionEstimateGripperAction to control the binocular camera to take a picture, identifies the image based on the pre-trained neural network, and judges whether a target object exists in the current area; if so, the number and positions of the targets and the detection point and grabbing point corresponding to each target object are calculated, stored in the array {action_queue_ptr}, and returned to the master controller.
In step S305, the mechanical arm control system controls the mechanical arm to move to the detection point and detects whether the target object meets the condition; if not, the first group of data in the array {action_queue_ptr} is popped and the arm moves to the next detection point, and if so, the grabbing operation is performed. During execution, if the current object is detected not to meet the preset requirement, the information is returned to the master controller.
In a specific application embodiment, a detailed control flow for realizing automatic grabbing of a ground target object based on a behavior tree is as follows:
Firstly, the robot's exploration target points are set according to the scene map to be explored; these GPS target points are stored in the array {task_info_ptr} in the database node (DatabaseReceivedPickingTaskInfo), and the queue {remaining_gps}, i.e. the series of GPS points preset for the scene, is then extracted from {task_info_ptr} with the unpack-queue operation (UnpackGPSGoals). The first GPS target point to go to is extracted through the acquire-first-GPS node GetPose2DFromSharedVector, and the robot is controlled to reach the target point by the navigation node NavigateToPose: the master controller requests the action "navigator_to_position", the industrial personal computers of the subsystems check the received request names in real time, and when a request name matches a predefined service/action the corresponding subsystem executes the operation; after the industrial personal computer of the driving planning system sees the "navigator_to_position" request, it navigates and positions to the received GPS target point.
After the robot reaches the target point, the target coarse identification node PerceptionFindTarget controls the front camera to take a picture and return whether a target exists in front: the master industrial personal computer requests the service "perception/find_target"; after the industrial personal computer of the target detection system detects the service request, it controls the camera to take a picture, inputs the picture into the neural network, and detects whether a target object exists in the picture; if so, the center coordinates of the target objects are stored in the array {centroids_ptr} and returned to the master industrial personal computer, and if not, the returned array is empty.
The center coordinates of the target objects are converted into grid coordinates with the convert-to-grid-coordinates node (PutPointsIntoGrids) and stored in {task_info_ptr}. The queue {remaining_grids}, i.e. the grid coordinates of the target objects, is then extracted with the unpack-queue operation (UnpackGrids). The grid coordinate of the first target object is extracted through the acquire-first-grid-coordinate node GetGridFromSharedVector, and the robot is controlled to move to that grid by the navigation node NavigateToPose; the behavior tree continuously checks the status information returned by the node, continuing to execute it while RUNNING is returned and executing the next node once SUCCESS is returned.
The arm move-to-fixed-point node ArmMoveToFixedPoint (type=1) then moves the end of the mechanical arm to the photographing point: the master controller requests the action "arm/move_to_fixed_point", and after the industrial personal computer of the mechanical arm control system sees the request, it controls the end of the mechanical arm to move to the photographing point.
Once in place, the target fine positioning node PerceptionEstimateGripperAction controls the binocular camera in the target detection system to take a picture: the master industrial personal computer requests the service "perception/estimate_gripper_action"; after the industrial personal computer of the target detection system detects the service request, it identifies the image based on the pre-trained neural network and judges whether a target object exists in the current area; if so, the number and positions of the targets and the detection point and grabbing point corresponding to each target object are calculated, stored in the array {action_queue_ptr}, and returned to the master industrial personal computer, and if not, the returned array is empty.
The detection point and grabbing point of the first target object are then extracted from the array through the get-first-target-information node GetGripperActionFromSharedQueue and output to {target}.
The mechanical arm control system then performs the process-target sequence (ArmProcessTarget): the master industrial computer requests the action 'arm/process_target'. When the industrial computer of the mechanical arm control system detects the request, it moves the end of the mechanical arm to the detection point; once in place, the metal detector checks whether the target object satisfies the condition. If it does, the grabbing operation is performed; after grabbing, the torque sensor measures the mass of the object. If the mass meets the requirement, the object is placed into the recovery basket; otherwise it is discarded. During execution, if the object is found to be non-metal, of unacceptable mass, or the grab fails, this information is returned to the master industrial computer, which prepares to receive the next instruction.
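The decision chain of the process-target action (metal check, then grab, then mass check, then basket or discard) can be summarized as a small sketch; the outcome labels are invented for illustration and are not the actual return codes of the system:

```python
def process_target(is_metal, grab_succeeds, mass_within_spec):
    """Mirror the process-target decision chain; return a label for the outcome."""
    if not is_metal:
        return "not_metal"          # reported back; wait for the next instruction
    if not grab_succeeds:
        return "grab_failed"        # reported back; grab may be retried
    if not mass_within_spec:
        return "discarded"          # grabbed, but mass out of specification
    return "placed_in_basket"       # grabbed and mass acceptable

print(process_target(True, True, True))   # placed_in_basket
print(process_target(False, True, True))  # not_metal
```

Each early return corresponds to one of the failure reports the arm's industrial computer sends back to the master industrial computer.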
On receiving the report that the detected object is not metal, the master industrial computer pops the first group of data from {action_queue_ptr}, runs the arm grabbing node ArmProcessTarget again, and requests the mechanical arm to move to the next detection point. The master industrial computer uses the blackboard check node BlackboardCheckInt to judge whether a grab has failed; after a single grab failure, the grab can be retried.
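The blackboard-based retry logic can be sketched as follows, assuming a simple shared key/value blackboard; the key name `grab_failures` and the single-retry limit are illustrative assumptions, not taken from the patent:

```python
class Blackboard:
    """Minimal shared key/value store, as consulted by BlackboardCheckInt-style nodes."""
    def __init__(self):
        self.data = {}

    def set(self, key, value):
        self.data[key] = value

    def get(self, key, default=None):
        return self.data.get(key, default)

def try_grab_with_retry(blackboard, attempt_grab, max_retries=1):
    """Run the grab; if the blackboard records a failure, retry up to max_retries times."""
    for _ in range(max_retries + 1):
        if attempt_grab():
            blackboard.set("grab_failures", 0)  # success clears the failure counter
            return True
        blackboard.set("grab_failures", blackboard.get("grab_failures", 0) + 1)
    return False  # exhausted all attempts

attempts = iter([False, True])  # first grab fails, the retry succeeds
bb = Blackboard()
ok = try_grab_with_retry(bb, lambda: next(attempts))
print(ok)  # True: succeeded on the single allowed retry
```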
The invention realizes fully automatic grabbing of ground target objects based on a behavior tree. The behavior tree is a graphical modeling language: once the client nodes of each subsystem have been encapsulated, a client request for a service only requires deriving the corresponding nodes according to the task requirements. The system therefore has a high degree of automation, high recognition accuracy, high reliability, strong fault tolerance and good extensibility; it can identify target objects quickly, meets real-time communication requirements, is easy to control through a visual interface and easy to program, and can adjust the task logic in real time during execution according to target characteristics, giving it high flexibility. At the same time, the modular structure divides the control system into three subsystems; this distributed design is easy to generalize and serialize, makes the system convenient to extend, distributes the computational load across the subsystems, and effectively improves the reliability and fault tolerance of the system.
The foregoing describes preferred embodiments of the present invention and is not to be construed as limiting it in any way. Although the invention has been described with reference to preferred embodiments, it is not limited thereto. Any simple modification, equivalent change or variation made to the above embodiments in accordance with the technical spirit of the invention, without departing from the content of the technical scheme, falls within the protection scope of the technical scheme of the invention.

Claims (10)

1. A ground target object automatic grabbing control method based on a behavior tree is characterized by comprising the following steps:
s01, constructing a driving planning system for executing driving planning control, a target detection system for executing target detection control and a mechanical arm control system for executing mechanical arm grabbing control, and respectively connecting the driving planning system, the target detection system and the mechanical arm control system to a master controller for centralized management;
s02, respectively registering a driving planning node, a target detection node and a mechanical arm control node in the master controller, respectively and correspondingly executing the driving planning, the target detection and the mechanical arm control, and constructing and forming a behavior tree according to the execution sequence of each node;
s03, after starting control, the master controller controls all systems to sequentially execute all nodes according to the behavior tree, wherein the driving planning system executes the driving planning nodes to control a controlled robot provided with a target detection system and a mechanical arm to drive to a target position; when the controlled robot is judged to be driven to the target position, the target detection node is executed through the target detection system so as to control the target object detection; and when the target detection system detects the required target object, executing the mechanical arm control node through the mechanical arm control system to control the mechanical arm to grab until the ending state of the behavior tree is executed, and finishing the automatic grabbing of the target object.
2. The behavior tree based ground target object automatic grabbing control method according to claim 1,
the driving planning nodes comprise a navigation node for requesting the controlled robot to navigate to a specified position, the input being the GPS coordinates of the target position;
the target detection nodes comprise a target rough identification node and a target fine positioning node; the target rough identification node is used for requesting a front camera in the target detection system to take a picture and returning a result indicating whether a target is present ahead, and if so, the center coordinates of the target object are stored in a first array {centroids_ptr}; the target fine positioning node is used for requesting a binocular camera in the target detection system to take a picture and returning a recognition result indicating whether a target object is present in the current region, and if the target fine positioning node recognizes a target object, the recognized object position information and the corresponding detection points and grabbing points are stored in a second array {action_queue_ptr};
the mechanical arm control nodes comprise a mechanical arm folding node, an arm-move-to-fixed-point node for requesting the mechanical arm to move to a fixed point position, and a mechanical arm grabbing node for requesting the mechanical arm to grab.
3. The method as claimed in claim 2, wherein the behavior tree further includes a plurality of condition nodes for assisting the driving planning node, the target detection node and the mechanical arm control node in completing the corresponding driving planning, target detection and mechanical arm grabbing; the condition nodes include a database node for storing GPS target point information, a first queue-judgment node for determining whether the queue storing the target GPS information is empty, a get-head-GPS node for acquiring the first datum in the queue storing the target GPS information, and a remove-head-GPS node for removing the first datum from that queue.
4. The behavior tree-based ground target object automatic grabbing control method according to claim 3, wherein the behavior tree further comprises control nodes that define how the child nodes of the current node are traversed; the control nodes comprise a sequential execution node, a selection node and a keep-running node. The sequential execution node specifies that the child nodes under the current node are executed in left-to-right order, each node executing only after the previous one has succeeded; if any child node fails, the execution fails. The selection node specifies that if the first child node returns failure the next child node is executed, trying different strategies until a feasible one is found; if the last child node also returns failure, all child nodes are halted and failure is returned. The keep-running node specifies that the behavior tree under the current node keeps running until a child node returns failure.
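The sequential execution node and the selection node in this claim correspond to the standard behavior-tree Sequence and Fallback (Selector) nodes. A simplified model of their semantics, ignoring the RUNNING state for brevity, can be sketched as follows; this is an illustrative reduction, not the patent's implementation:

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

def sequence(children):
    """Run children left to right; fail as soon as one fails, succeed if all succeed."""
    for child in children:
        if child() == FAILURE:
            return FAILURE
    return SUCCESS

def fallback(children):
    """Try children left to right; succeed as soon as one succeeds, fail if all fail."""
    for child in children:
        if child() == SUCCESS:
            return SUCCESS
    return FAILURE

print(sequence([lambda: SUCCESS, lambda: FAILURE]))  # FAILURE
print(fallback([lambda: FAILURE, lambda: SUCCESS]))  # SUCCESS
```

The keep-running node would wrap one of these and re-tick it on every cycle until a child returns failure.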
5. The behavior tree-based ground target object automatic grabbing control method according to claim 1, wherein in the step S03, the step of controlling, by the master controller, the systems to sequentially execute the nodes according to the behavior tree includes: and the master controller generates control instructions according to the execution sequence of each node in the behavior tree and distributes the control instructions to the driving planning system, the target detection system and the mechanical arm control system, the control instructions comprise information of actions to be executed, after each system receives the control instructions sent by the master controller, the action information to be executed, which is predefined by each system, is matched with the action information to be executed in the control instructions, and if the matching is successful, the system which is successfully matched starts to execute the corresponding node until the ending state of the behavior tree is executed.
6. The behavior tree-based ground target object automatic grabbing control method according to any one of claims 1 to 5, wherein the step S03 comprises:
s301, a master controller acquires information of a plurality of GPS target points to be explored in a target scene map, extracts one GPS target point as a current target point, and executes a driving planning node through a driving planning system so as to control a controlled robot to move to the current target point;
s302, when the controlled robot is judged to reach a current target point, the master controller controls the target detection node to be executed through the target detection system so as to collect a front image and identify whether a target object exists in front, and if so, coordinate information of the identified target object is obtained;
s303, executing the driving planning node through the driving planning system by the master controller so as to control the controlled robot to move to the position of the currently identified target object;
s304, after the controlled robot is judged to reach the position of the target object, the master controller executes the mechanical arm control node through the mechanical arm control system to control the tail end of the mechanical arm to move to the position of a photographing point, then executes the target detection node through the target detection system to acquire an image and identify whether the target object exists in the current area, and if the target object exists, information of the target object and detection point and grabbing point information corresponding to the target object are acquired;
s305, the master controller executes the mechanical arm control node through the mechanical arm control system so as to control and execute the grabbing of the target object according to the information of the detection point and the grabbing point.
7. The behavior tree based ground target object automatic grabbing control method according to claim 6, wherein the step S301 comprises:
acquiring the target GPS information to be explored according to the target scene map, and storing it in a third array {task_info_ptr} in the database node;
extracting the plurality of GPS points preset in the scene from the third array {task_info_ptr} by a queue separation operation to form a GPS information queue {remaining_GPS}, and extracting the first GPS target point to be visited as the current target point by executing the get-head-GPS node;
sending a control instruction for executing a navigation request to specify a position to each system through the master controller, wherein the control instruction carries the position information of the current target point;
and after receiving the control instruction, the driving planning system controls the controlled robot carrying the target detection system and the mechanical arm to move to the current target point by executing the navigation node, wherein the behavior tree continuously checks the state information returned by the current node in the execution process, and controls to continue executing the node or execute the next node according to the returned state information until the controlled robot reaches the current target position.
8. The behavior tree based ground target object automatic grabbing control method according to claim 6, wherein the step S302 comprises:
when the controlled robot is judged to have reached the target position, the master controller sends each system a request to photograph with the camera and identify the target object;
after receiving the control instruction sent by the master controller, the target detection system executes the target rough identification node to control the front camera to take a picture and feed the picture into the neural network, detects and identifies through the neural network whether a target object is present in the picture, and returns the identification result, wherein if a target object is present, its center coordinates are stored in the first array {centroids_ptr} and returned to the master controller;
converting the center coordinates of the target object into grid coordinates by using the grid-coordinate conversion node and storing them in the third array {task_info_ptr}, and extracting the grid coordinates of the target object from the third array {task_info_ptr} by a queue separation operation to form an information queue {remaining_grids};
in the step S303, the grid coordinate of a target object is extracted from the information queue {remaining_grids} by executing the get-head-grid-coordinate node, and the driving planning system then executes the navigation node to control the controlled robot to go to the position of the current grid, wherein during execution the behavior tree continuously checks the state information returned by the current node and, according to the returned state, either continues executing the node or executes the next node until the controlled robot reaches the target grid position.
9. The behavior tree based ground target object automatic grabbing control method according to claim 6, wherein in the step S304, after the controlled robot reaches the position of the target object, the mechanical arm control system executes the arm-move-to-fixed-point node to control the end of the mechanical arm to move to the photographing position;
the target detection system executes the target fine positioning node to control the binocular camera to take a picture, identifies the image based on a pre-trained neural network, judges whether a target object is present in the current region, and if so calculates the number and positions of the targets and the detection point and grabbing point corresponding to each target object, stores them in the array {action_queue_ptr}, and returns the data to the master controller;
in the step S305, the mechanical arm control system controls the mechanical arm to move to the detection point and detects whether the target object satisfies the condition; if not, the first group of data in the array {action_queue_ptr} is popped and the arm moves to the next detection point; if so, the grabbing operation is performed; during execution, if the current object is found not to meet the preset requirement, the information is returned to the master controller.
10. A system for implementing the behavior tree-based ground target object automatic grabbing control method according to any one of claims 1 to 9, comprising:
the driving planning system is used for controlling the controlled robot carrying the target detection system and the mechanical arm to drive to a target position through the driving planning node;
the target detection system is used for detecting a target object through the target detection node when the controlled robot runs to a target position;
the mechanical arm control system is used for controlling a mechanical arm to grab when a required target object is detected through the mechanical arm control node;
the master controller is used for managing the driving planning system, the target detection system and the mechanical arm control system in a centralized manner;
each system is provided with controllers with the same Docker container environment, each Docker container is provided with a robot operating system, and the IP addresses of the controllers are located in the same network segment.
CN202211106744.2A 2022-09-13 2022-09-13 Behavior tree-based ground target object automatic grabbing control method and system Active CN115179301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211106744.2A CN115179301B (en) 2022-09-13 2022-09-13 Behavior tree-based ground target object automatic grabbing control method and system


Publications (2)

Publication Number Publication Date
CN115179301A true CN115179301A (en) 2022-10-14
CN115179301B CN115179301B (en) 2022-12-09

Family

ID=83524714


Country Status (1)

Country Link
CN (1) CN115179301B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108780303A (en) * 2016-03-03 2018-11-09 玛格兹诺有限公司 The control process of robot with behavior tree framework
CN112276951A (en) * 2020-10-22 2021-01-29 中国人民武装警察部队工程大学 Unmanned search and explosion-removal robot system and working method thereof
CN113967913A (en) * 2021-10-22 2022-01-25 中冶赛迪上海工程技术有限公司 Motion planning method and system of steel grabbing device
CN114102585A (en) * 2021-11-16 2022-03-01 北京洛必德科技有限公司 Article grabbing planning method and system
CN114536333A (en) * 2022-02-18 2022-05-27 南京邮电大学 Mechanical arm task planning system based on behavior tree and application method




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant