CN117130381A - Robot control method, device, equipment and medium - Google Patents

Robot control method, device, equipment and medium

Info

Publication number
CN117130381A
CN117130381A (application CN202311065901.4A)
Authority
CN
China
Prior art keywords
robot
target point
position information
positioning
positioning tag
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311065901.4A
Other languages
Chinese (zh)
Inventor
Name withheld at the applicant's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Blue Intelligent Technology Co ltd
Original Assignee
Nanjing Blue Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Blue Intelligent Technology Co ltd
Priority to CN202311065901.4A
Publication of CN117130381A
Legal status: Pending (current)

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Manipulator (AREA)

Abstract

The invention discloses a robot control method, device, equipment and medium. The control method comprises the following steps: collecting an environment image while the target robot is in a state of returning to a first target point; in response to identifying a first positioning tag from the environment image, calculating first position information of the first target point, the first target point being arranged to correspond to the first positioning tag; performing a primary pose adjustment of the robot according to the first position information; in response to the robot reaching the first target point and a second positioning tag being identified from an environment image acquired after reaching the first target point, calculating second position information of a second target point, the first positioning tag and the second positioning tag lying on different planes and the second target point being arranged to correspond to the second positioning tag; and performing a secondary pose adjustment of the robot according to the second position information to complete the docking of the robot with the second target point. The invention has the advantages of low cost, good stability and robustness.

Description

Robot control method, device, equipment and medium
Technical Field
The invention relates to the technical field of robot control, in particular to a method, a device, equipment and a medium for controlling a robot.
Background
A quadruped robot is a machine that executes work automatically: it can be controlled by human commands, can autonomously run pre-programmed content, and can assist or replace workers in dangerous environments and in simple repetitive tasks. Existing quadruped robots rely on an on-board battery and, because of battery capacity limits, require frequent recharging. Therefore, if the mobile robot is to operate independently for a long period, the quadruped robot needs to return autonomously to a target area for charging. Traditional return schemes are realized by a combination of magnetic guidance, infrared, or radar positioning; the system complexity is high, so the implementation cost and working stability are not ideal.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide a robot control method, a device, equipment and a medium, which can solve the technical problems of high complexity, high cost and low stability of the existing scheme.
In order to achieve the above purpose, the invention is realized by adopting the following technical scheme:
in a first aspect, the present invention provides a robot control method, the control method comprising:
Collecting an environment image when the target robot is in a state of returning to the first target point;
calculating first position information of the first target point in response to identifying a first positioning tag from the environment image; the first target point is arranged corresponding to the first positioning label;
performing primary pose adjustment on the robot according to the first position information;
responsive to the robot reaching the first target point and identifying a second positioning tag from an environmental image acquired after reaching the position of the first target point, calculating second position information of a second target point; the first positioning tag and the second positioning tag are positioned on different planes; the second target point is arranged corresponding to the second positioning label;
and performing secondary pose adjustment on the robot according to the second position information to complete the docking of the robot with the second target point.
In one possible implementation, before the identifying the first positioning tag from the environment image, the method further includes:
calculating third position information of a third target point in response to the first positioning tag not being identified from the environment image and the auxiliary positioning tag being identified from the environment image; the auxiliary positioning tag is arranged corresponding to the third target point, the robot can acquire an environment image of the first positioning tag when the robot is at the third target point, and any two of the first positioning tag, the second positioning tag and the auxiliary positioning tag are positioned on different planes;
And performing front pose adjustment on the robot according to the third position information until the robot is adjusted to the third target point and acquiring an environment image.
In one possible implementation manner, the calculating, in response to identifying the first location tag from the environment image, the first location information of the first target point further includes:
filtering the environment image acquired in the state of returning to the first target point to obtain a first image;
performing contour detection on the first image to obtain first contour feature information of a positioning label to be identified;
matching the first profile feature information with a predefined first positioning tag template;
and/or,
the identifying a second positioning tag from an environmental image acquired after reaching a position of the first target point in response to the robot reaching the first target point comprises:
filtering the environment image acquired in the state of reaching the first target point to obtain a second image;
performing contour detection on the second image to obtain second contour feature information of the positioning label to be identified;
matching the second profile feature information with a predefined second positioning tag template;
and/or,
the calculating third position information of the third target point in response to the first positioning tag not being identified from the environment image and the auxiliary positioning tag being identified from the environment image further comprises:
filtering the environment image acquired in the state of returning to the first target point to obtain a third image;
performing contour detection on the third image to obtain third contour feature information of the positioning label to be identified;
and matching the third profile characteristic information with a predefined auxiliary positioning label template.
In one possible implementation of the present invention,
the robot has a first camera for acquiring an image of an environment, the calculating first position information of a first target point comprising:
extracting at least three first feature points based on the first contour information;
matching the first characteristic points with corresponding points of a three-dimensional space to obtain 2D-3D characteristic point pairs;
estimating the 2D-3D characteristic point pair to obtain the position relation of the first camera relative to the first positioning label; the positional relationship includes a rotation vector and a translation vector;
calculating the position relationship between the first target point and the robot by combining the position relationship between the first camera and the first positioning label based on the position relationship between the first positioning label and the first target point and the position relationship between the first camera and the robot;
Calculating the position information of the first target point under the world coordinate system according to the position relation between the first target point and the robot based on the position information of the robot under the world coordinate system;
and/or,
the robot has a second camera for acquiring an image of the environment, the calculating second position information of the second target point comprising:
extracting at least three second feature points based on the second contour information;
matching the second characteristic points with corresponding points of a three-dimensional space to obtain 2D-3D characteristic point pairs;
estimating the 2D-3D characteristic point pair to obtain the position relation of the second camera relative to a second positioning label; the positional relationship includes a rotation vector and a translation vector;
calculating the position relationship between the second target point and the robot by combining the position relationship between the second camera and the second positioning label based on the position relationship between the second positioning label and the second target point and the position relationship between the second camera and the robot;
calculating the position information of the second target point under the world coordinate system according to the position relation between the second target point and the robot based on the position information of the robot under the world coordinate system;
and/or,
the robot has a first camera for acquiring an image of an environment, the calculating third position information of a third target point comprising:
extracting at least three third feature points based on the third profile information;
matching the third characteristic points with corresponding points of a three-dimensional space to obtain 2D-3D characteristic point pairs;
estimating the 2D-3D characteristic point pair to obtain the position relation of the first camera relative to the auxiliary positioning tag; the positional relationship includes a rotation vector and a translation vector;
calculating the position relationship between the third target point and the robot by combining the position relationship between the first camera and the auxiliary positioning tag based on the position relationship between the auxiliary positioning tag and the third target point and the position relationship between the first camera and the robot;
and calculating the position information of the third target point under the world coordinate system according to the position relation between the third target point and the robot based on the position information of the robot under the world coordinate system.
In one possible implementation of the present invention,
the front pose adjustment comprises the step of controlling the robot to move to the third target point by adopting a path planning algorithm based on the third target point and the position information of the robot;
The initial pose adjustment comprises the step of controlling the robot to move to the first target point by adopting a path planning algorithm based on the first target point and the position information of the robot;
the secondary pose adjustment comprises the step of controlling the robot to navigate to move to the second target point by adopting a path planning algorithm based on the second target point and the position information of the robot.
In one possible implementation, the controlling the robot navigation movement further includes:
in the moving process of the robot, collecting environment perception information and generating road condition information;
based on the road condition information, a motion control algorithm is adopted to generate a speed and posture control instruction;
and controlling the speed and the posture of the robot in the moving process based on the speed and posture control instruction.
In one possible implementation, the robot enters the state of returning to the first target point when it self-triggers an internal return instruction or receives an external return instruction.
In still another aspect, there is provided a robot control device including:
the image acquisition module acquires an environment image when the target robot is in a state of returning to the first target point;
A first target point confirmation module that calculates first position information of the first target point in response to the first positioning tag being identified from the environment image; the first target point is arranged corresponding to the first positioning label;
the primary pose adjustment module is used for performing primary pose adjustment on the robot according to the first position information;
a second target point confirmation module, which responds to the robot reaching the first target point, and identifies a second positioning label from the environment image acquired after the robot reaches the position of the first target point, and calculates second position information of the second target point; the first positioning tag and the second positioning tag are positioned on different planes; the second target point is arranged corresponding to the second positioning label;
and the secondary pose adjustment module is used for performing secondary pose adjustment on the robot according to the second position information so as to complete the docking of the robot with the second target point.
In yet another aspect, a computer device is provided, the computer device including a processor and a memory having at least one instruction stored therein, the at least one instruction being loaded and executed by the processor to implement the robot control method described above.
In yet another aspect, a computer readable storage medium is provided, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the robot control method described above.
Compared with the prior art, the technical scheme of the application can have the following beneficial effects:
the application provides a robot control method, which is characterized in that a first target point and a second target point are predefined, and based on a first positioning label and a second positioning label which are arranged on different planes, the first target point and the first positioning label are correspondingly arranged, and the second target point and the second positioning label are correspondingly arranged; when the robot is in a state of returning to the first target point, identifying the position of the first positioning label by collecting surrounding environment images, driving the robot to find the first target point, and adjusting the first target point by the initial pose; after the robot is positioned at the first target point, identifying a second positioning label through an image, searching for the second target point, performing secondary pose adjustment to reach the second target point, and finally, completing the docking with the second target point; by identifying each positioning label one by one, the pose of the positioning label is gradually adjusted, and finally, the positioning label is accurately returned to the second target point to finish the butt joint; the reliability of the robot in place is improved; meanwhile, the positioning label and the target point are flexibly and conveniently set, can adapt to different working requirements, and have higher robustness.
Drawings
Fig. 1 is a block diagram of a robot control system according to an embodiment of the present invention;
fig. 2 is a schematic position diagram of a monocular vision camera according to an embodiment of the present invention;
FIG. 3A is a schematic diagram of a charging pile according to an embodiment of the present invention;
FIG. 3B is a schematic view of a house with charging piles according to an embodiment of the present invention;
fig. 4 is a flow chart of a robot control method according to an embodiment of the present invention;
FIG. 5A is a schematic diagram of a robot identifying a first positioning tag according to an embodiment of the present invention;
Fig. 5B is a schematic diagram of a position of a robot reaching a first target point according to an embodiment of the present invention;
fig. 5C is a schematic diagram of a position of a robot reaching a second target point according to an embodiment of the present invention;
fig. 6A is a schematic diagram of a position of a robot reaching a third target point according to an embodiment of the present invention;
FIG. 6B is a schematic diagram of a robot reaching a first target point in a room according to an embodiment of the present invention;
FIG. 6C is a schematic diagram of a robot reaching a second target point in a room according to an embodiment of the present invention;
fig. 7 is a block diagram of a robot control device according to an embodiment of the present invention;
marked in the figure as:
1-charging pile; 11-pile body; 12-first positioning tag; 13-base; 14-second positioning tag;
2-quadruped robot dog; 21-monocular vision camera; 22-laser radar;
3-house body; 31-third positioning tag.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
As shown in fig. 1, there is shown a block diagram of a robot control system according to an exemplary embodiment, including a target robot and a return target. In one implementation, as shown in fig. 2, the specific target robot is a quadruped robot dog 2. In one possible embodiment, as shown in fig. 3A, the specific return target is a charging pile 1: when charging is needed, the quadruped robot dog 2 can return to the charging pile 1 for charging. In another possible embodiment, the return target may also be a doghouse, a house 3, a rest area, etc., and the quadruped robot dog 2 can be purposefully returned to the doghouse or rest area when the user wants the robot to be recalled there. In other embodiments, as shown in fig. 3B, the return target may also be a doghouse with a charging pile, a house 3, a rest area, etc. Of course, the target robot may also be a humanoid robot, another multi-legged robot, or the like.
Optionally, the robot comprises a data processing device and a data acquisition device. The data acquisition device comprises a data memory; the data acquisition device acquires the distance between the quadruped robot dog 2 and the charging pile 1, and after the distance data are obtained they can be stored in the data memory. For example, as shown in fig. 2, the specific data acquisition device may include two monocular vision cameras 21 disposed on the head and the abdomen; in other alternative embodiments, the monocular vision cameras may be replaced with multi-view cameras.
Alternatively, the data processing device may be a computer device with high computing power, for example a control device on the quadruped robot dog 2. The data processing device analyses the acquired distance data, azimuth information and the like between the quadruped robot dog 2 and the charging pile 1, so as to control the quadruped robot dog 2 to move toward the charging pile 1, thereby realizing functions such as automatic charging of the quadruped robot dog 2 through the charging pile 1.
Optionally, the quadruped robot dog 2 and the charging pile 1 may also be communicatively connected through a wireless network, such as a Bluetooth connection, and the distance and positional relationship between the two can be determined through this communication connection.
As shown in fig. 4, which is a flowchart of a robot control method according to a specific exemplary embodiment, the robot control method is performed by a computer device, which may be a data processing device, and the robot control method includes the steps of:
step 401, collecting an environment image when the target robot is in a state of returning to a first target point;
Optionally, the first target point is a specific spatial area the robot is expected to reach. Before the first target point is reached, the environment image is an image captured by a first camera on the robot's head, and the first positioning tag is identified from this image; of course, the environment image may also be captured by other cameras, which is not particularly limited. The robot enters the state of returning to the first target point when it self-triggers an internal return instruction or receives an external return instruction. In an actual use scenario, the internal return instruction may be triggered automatically by the robot according to an internal program, for example when the battery level falls below a preset level or the current task is completed; the external return instruction is one the robot receives through communication, for example an instruction issued by a user via a remote control device, voice, or the like.
Step 402, in response to identifying the first positioning label from the environment image, calculating first position information of the first target point; the first target point is arranged corresponding to the first positioning label;
Alternatively, the first positioning tag may be a two-dimensional code or another specific graphic structure; a two-dimensional code, for example, can be identified by an image recognition algorithm. By identifying the first positioning tag in the image, the collected image can be processed and analysed by an image processing algorithm so as to calculate the specific position of the robot relative to the first positioning tag and, from it, the first position information of the first target point. Meanwhile, there is a correspondence between the first target point and the first positioning tag, that is, the first position information of the first target point is calculated from the position information of the first positioning tag recognized by the robot.
Step 403, performing primary pose adjustment on the robot according to the first position information;
Optionally, a specific path to the first target point is planned according to the pose of the robot relative to the first positioning tag, and the robot follows the planned path to the first target point so as to realize the primary pose adjustment. The robot may move forward, move backward, turn, and so on in the course of reaching the first target point.
Step 404, responding to the robot reaching the first target point, identifying a second positioning label from the environment image acquired after reaching the position of the first target point, and calculating second position information of a second target point; the first positioning tag and the second positioning tag are positioned on different planes; the second target point is arranged corresponding to the second positioning label;
optionally, the second target point may be a final charging position, or a final desired robot dog position for the user, or a standing position for the humanoid robot.
Optionally, after reaching the first target point, the collected environment image is an image captured by a second camera on the abdomen of the robot dog, and the second positioning tag is identified from this image. Generally, the second positioning tag is smaller than the first positioning tag, which makes it convenient for close-range identification; of course, the environment image at this stage may also be captured by other cameras at other positions, which is not particularly limited. By identifying the second positioning tag in the image, the collected image can be processed and analysed by an image processing algorithm so as to calculate the specific position of the robot relative to the second positioning tag and, from it, the second position information of the second target point.
Step 405, performing secondary pose adjustment on the robot according to the second position information, so as to complete the docking of the robot with the second target point;
optionally, a specific path of the robot reaching the second target point is planned according to the pose of the robot relative to the second positioning tag, and the robot reaches the second target point according to the planned path so as to realize secondary pose adjustment. The robot can go forward, backward, lie prone, bend limbs, etc. when reaching the second target point.
In summary, the first target point and the second target point are predefined, and the first positioning tag and the second positioning tag are arranged on different planes, the first target point corresponding to the first positioning tag and the second target point corresponding to the second positioning tag. When the robot is in the state of returning to the first target point, it identifies the position of the first positioning tag from collected images of the surrounding environment, which drives the robot to find the first target point and reach it through the primary pose adjustment. After the robot is at the first target point, it identifies the second positioning tag from an image, locates the second target point, and performs the secondary pose adjustment to reach it, finally completing the docking with the second target point. By identifying the positioning tags one by one, the pose of the robot is adjusted step by step until it accurately returns to the second target point and completes the docking, which improves the reliability with which the robot arrives in place. Meanwhile, the positioning tags and target points are flexible and convenient to set, can adapt to different working requirements, and give the scheme high robustness.
As shown in fig. 5A, 5B and 5C, the process of the quadruped robot dog returning for charging after receiving a remote return instruction is shown in sequence, and specifically includes the following steps:
step 501, collecting an environment image when a target robot is in a state of returning to a first target point;
alternatively, in one embodiment, as shown in fig. 2 and 5A, the charging pile 1 includes a pile body 11 and a seat body 13, the pile body 11 and the seat body 13 are disposed perpendicular to each other, a first positioning label 12 is disposed on a front surface of the pile body 11, a second positioning label 14 is disposed on a front upper surface of the seat body 13, a first target point is disposed directly above the seat body 13, and a second target point is disposed on a charging interface of the seat body 13.
Alternatively, in other optional embodiments, the angle between the first positioning tag and the second positioning tag may be any angle, and the two may even lie on planes parallel to each other; in that case the robot only needs suitably arranged cameras on its body to be able to acquire the first positioning tag.
Step 502, identifying a first positioning label from an environment image;
step 5021, filtering an environment image acquired in a state of returning to a first target point to obtain a first image; specifically, filtering an environment image to be identified by adopting a Gaussian denoising algorithm, and performing binarization processing after the filtering processing to obtain a first image; noise removal helps to improve the accuracy of subsequent processing steps, particularly in edge detection and feature extraction tasks; binarization can provide better contrast and feature visibility;
Step 5022, performing contour detection on the first image to obtain first contour feature information of the positioning tag to be identified; contour detection is performed on the binarized environment image to obtain the contour information of the positioning tag to be identified, and feature information is extracted from the contour information by a polygon approximation algorithm; feature information such as the shape and area of the contour can be selected as required as the feature information actually used;
step 5023, matching the first profile feature information with a predefined first positioning label template; acquiring a predefined positioning label template, wherein the positioning label template comprises characteristic information and ID codes of positioning labels, and matching the characteristic information with the characteristic information of the positioning labels to acquire the ID codes of first positioning labels; the ID codes are unique and are used for distinguishing each positioning tag; the characteristic information of the first positioning label template is matched with the characteristic information of the positioning label, and a least square fitting result can be adopted; after matching, the positioning tag is identified according to the ID code.
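As an illustration of steps 5021-5023, the following is a minimal sketch, written with OpenCV, of the Gaussian denoising, binarization, contour detection, polygon approximation, and template matching just described. The template format (corner count, area range, ID code) is a hypothetical stand-in, since the patent does not fix a concrete template representation.

```python
import cv2
import numpy as np

# Hypothetical template format: the patent only requires "feature information and an ID code".
FIRST_TAG_TEMPLATE = {"id": 1, "corners": 4, "min_area": 500.0, "max_area": 50000.0}

def detect_first_tag(env_image, template):
    """Steps 5021-5023: Gaussian denoising and binarization, contour detection,
    polygon approximation, and matching against a predefined tag template."""
    gray = cv2.cvtColor(env_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                      # filtering (step 5021)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarization
    # Contour detection (step 5022); OpenCV 4.x return signature.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        peri = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * peri, True)        # polygon approximation
        area = cv2.contourArea(approx)
        # Match contour features (shape = corner count, area) against the template (step 5023).
        if len(approx) == template["corners"] and template["min_area"] <= area <= template["max_area"]:
            return template["id"], approx.reshape(-1, 2).astype(np.float32)
    return None, None

# Usage: tag_id, corners_2d = detect_first_tag(cv2.imread("frame.png"), FIRST_TAG_TEMPLATE)
```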
In response to identifying the first positioning tag from the environment image, first position information of the first target point is calculated, step 503.
Optionally, after the first positioning tag is identified in the environment image, that is, when the tag with the same feature information as the first positioning tag template is acquired from the first image, the position information of the first target point is calculated. The specific steps of calculating the first position information of the first target point are as follows:
step 5031, extracting at least three first feature points based on the first contour information; a corner detection algorithm is adopted to extract the first feature points from the fitted first contour. All the first feature points have predefined corresponding three-dimensional coordinates in three-dimensional space.
Step 5032, matching the first feature point with a corresponding point in the three-dimensional space to obtain a 2D-3D feature point pair; two-dimensional coordinates and three-dimensional coordinates of the same first feature point form a 2D-3D feature point pair;
step 5033, estimating the 2D-3D feature point pairs to obtain the positional relationship of the first camera relative to the first positioning tag; the positional relationship includes a rotation vector and a translation vector. Specifically, the rotation vector and translation vector of the camera are estimated using a PnP algorithm so that the projection error, i.e. the re-projection error of the feature points in three-dimensional space projected onto the image plane, is minimized; this can be solved with an iterative optimization method such as Levenberg-Marquardt. When the iteration finishes, the estimated rotation vector and translation vector represent the pose of the first camera. The obtained pose result is expressed in the first camera coordinate system, and applying an inverse transformation to the pose information in the camera coordinate system yields the rotational and translational pose of the first positioning tag in the first camera coordinate system.
Step 5034, calculating the position relationship between the first target point and the robot based on the position relationship between the first positioning tag and the first target point and the position relationship between the first camera and the robot, in combination with the position relationship between the first camera and the first positioning tag;
step 5035, calculating the position information of the first target point under the world coordinate system according to the position relation between the first target point and the robot based on the position information of the robot under the world coordinate system;
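Steps 5031-5035 amount to a standard PnP estimation followed by a chain of rigid-body transforms. The sketch below, assuming a calibrated pinhole camera, uses OpenCV's iterative PnP solver (which refines with a Levenberg-Marquardt style optimization) and then composes the fixed tag-to-target and camera-to-robot transforms to express the first target point in the world coordinate system. The calibration values and tag geometry shown are placeholders, not data from the patent.

```python
import cv2
import numpy as np

def to_homogeneous(rvec, tvec):
    """Build a 4x4 rigid transform from a Rodrigues rotation vector and a translation vector."""
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(np.asarray(rvec, dtype=float))
    T[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return T

def first_target_in_world(corners_2d, tag_corners_3d, K, dist,
                          T_tag_target, T_robot_cam, T_world_robot):
    """Steps 5031-5035: 2D-3D point pairs -> tag pose in camera frame -> target point in world frame.
    corners_2d must be ordered consistently with tag_corners_3d."""
    # Step 5033: estimate the tag pose relative to the camera (iterative solver, LM refinement).
    ok, rvec, tvec = cv2.solvePnP(tag_corners_3d, corners_2d, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    T_cam_tag = to_homogeneous(rvec, tvec)            # maps tag-frame points into the camera frame
    # Step 5034: target in robot frame = (robot <- camera) * (camera <- tag) * (tag <- target).
    T_robot_target = T_robot_cam @ T_cam_tag @ T_tag_target
    # Step 5035: target in world frame = (world <- robot) * (robot <- target).
    T_world_target = T_world_robot @ T_robot_target
    return T_world_target[:3, 3]                      # position of the first target point

# Placeholder calibration and tag geometry (assumptions, not values from the patent):
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros((5, 1))
half = 0.05  # tag half-size in metres
tag_corners_3d = np.array([[-half, half, 0.0], [half, half, 0.0],
                           [half, -half, 0.0], [-half, -half, 0.0]], dtype=np.float32)
```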
step 504, performing primary pose adjustment on the robot according to the first position information;
optionally, the initial pose adjustment comprises controlling the robot to navigate to move to the first target point by adopting a path planning algorithm based on the first target point and the position information of the robot;
specifically, controlling the robotic navigation to move to the first target point includes the steps of:
step 5041, collecting environmental perception information during the movement of the quadruped robot dog 2;
step 5042, based on the environment perception information, constructing an environment map of the moving area of the quadruped robot dog 2 by adopting a map construction algorithm; in the present embodiment, the quadruped robot dog 2 uses the laser radar 22 and an odometer to collect laser radar data and odometry data as the environment perception information; the laser radar is arranged on the back of the quadruped robot dog 2, and the odometer is arranged inside the quadruped robot dog 2; a SLAM algorithm is adopted to fuse the laser radar data and the odometry data to construct the environment map, which contains distance and direction information of the surrounding obstacles (an illustrative sketch of this fusion is given after step 5044 below).
Step 5043, determining the position information of the first target point and the quadruped robot dog 2 on the environment map;
step 5044, based on the position information of the first target point and the quadruped robot dog 2, performing path planning by using the path planning algorithm with the first target point and the quadruped robot dog 2 as a destination and an origin, obtaining an optimal path, and controlling the robot to move along the optimal path.
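As a minimal illustration of the data fusion referenced in step 5042 above, the sketch below projects each lidar range reading through the odometry pose into a fixed-size occupancy grid. A full SLAM system with scan matching and loop closure is beyond the scope of this illustration, and the grid size, resolution, and input format are assumptions.

```python
import numpy as np

class OccupancyGrid:
    """Very small occupancy map: cells hit by lidar endpoints are marked occupied."""
    def __init__(self, size_m=20.0, resolution=0.05):
        self.res = resolution
        self.cells = int(size_m / resolution)
        self.grid = np.zeros((self.cells, self.cells), dtype=np.uint8)
        self.origin = size_m / 2.0   # world (0, 0) sits at the grid centre

    def integrate_scan(self, pose, ranges, angles, max_range=10.0):
        """pose = (x, y, yaw) from odometry; ranges/angles come from one lidar scan."""
        x, y, yaw = pose
        for r, a in zip(ranges, angles):
            if not np.isfinite(r) or r <= 0.0 or r >= max_range:
                continue                                  # discard invalid returns
            # Lidar endpoint in the world frame (scan fused with the odometry pose).
            wx = x + r * np.cos(yaw + a)
            wy = y + r * np.sin(yaw + a)
            i = int((wx + self.origin) / self.res)
            j = int((wy + self.origin) / self.res)
            if 0 <= i < self.cells and 0 <= j < self.cells:
                self.grid[j, i] = 1                       # mark obstacle cell

# Usage sketch: grid = OccupancyGrid(); grid.integrate_scan((0.0, 0.0, 0.0), ranges, angles)
```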
In this embodiment, the path planning algorithm adopts the A* (A-Star) algorithm, which evaluates candidate paths using a cost based on the distance to the destination and the degree of obstacle avoidance, and selects the optimal path. In other alternative embodiments, the Dijkstra algorithm may also be used for path planning.
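A compact illustrative A* implementation over such an occupancy grid is sketched below; it combines the distance already travelled with a straight-line (Manhattan) estimate of the remaining distance to the destination, and simply excludes obstacle cells. Path smoothing and the robot footprint are omitted.

```python
import heapq

def a_star(grid, start, goal):
    """grid: 2D array, 0 = free, 1 = obstacle; start/goal: (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan distance heuristic
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                                       # reconstruct the optimal path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):     # 4-connected grid moves
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_cost[cur] + 1                          # unit cost per step
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None                                               # no feasible path

# Usage: path = a_star(grid, (0, 0), (9, 9))
```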
In order to ensure stable movements of the four-legged robot dog 2 in different obstacle environments, optionally, the controlling the robot navigation movements further comprises:
step 5045, collecting environment perception information and generating road condition information in the moving process of the quadruped robot dog 2;
step 5046, based on the road condition information, generating a speed and posture control instruction by adopting a motion control algorithm;
step 5047, controlling the speed and posture of the quadruped robot dog 2 in the moving process based on the speed and posture control instruction.
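Steps 5045-5047 can be summarized by the following sketch of one plausible motion-control rule: the commanded speed is scaled down as the nearest obstacle gets closer and the yaw rate is saturated. The command structure and all thresholds are invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class VelocityPostureCommand:   # hypothetical command structure
    linear: float        # forward speed, m/s
    angular: float       # yaw rate, rad/s
    body_height: float   # standing height, m

def motion_control(nearest_obstacle_m: float, heading_error_rad: float) -> VelocityPostureCommand:
    """Generate a speed and posture command from simple road-condition information."""
    V_MAX, W_MAX = 0.8, 1.0                    # assumed speed limits for the robot dog
    # Slow down linearly as the nearest obstacle gets closer than 1.5 m, stop inside 0.3 m.
    scale = min(1.0, max(0.0, (nearest_obstacle_m - 0.3) / 1.2))
    linear = V_MAX * scale
    # Turn toward the planned path; saturate the yaw rate.
    angular = max(-W_MAX, min(W_MAX, 1.5 * heading_error_rad))
    # Lower the body slightly in cluttered areas for stability.
    body_height = 0.30 if nearest_obstacle_m > 1.0 else 0.25
    return VelocityPostureCommand(linear, angular, body_height)

# Usage: cmd = motion_control(nearest_obstacle_m=0.9, heading_error_rad=0.2)
```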
Step 505, in response to the robot reaching the first target point, and identifying the second positioning tag from the environmental image acquired after reaching the position of the first target point, calculating second position information of the second target point;
As shown in fig. 5B, the robot is above the base of the charging pile. The robot has a second camera disposed on the abdomen of the quadruped robot dog; the second target point is in fact a preset charging position. The second camera, specifically a monocular camera, is used to acquire an environment image from which the second positioning tag is identified.
At this time, in response to the robot reaching the first target point, and the second positioning tag is identified from the environmental image acquired after reaching the position of the first target point, the method specifically includes the following steps:
step 5051, filtering an environment image acquired in a state of reaching a first target point to obtain a second image;
step 5052, performing contour detection on the second image to obtain second contour feature information of the positioning label to be identified;
step 5053, matching the second profile feature information with a predefined second positioning label template;
the specific manner of steps 5051 to 5053 may be performed with reference to steps 5021 to 5023.
Then, calculating the second position information of the second target point includes the steps of:
step 5054, extracting at least three second feature points based on the second contour information;
Step 5055, matching the second feature point with a corresponding point in the three-dimensional space to obtain a 2D-3D feature point pair;
step 5056, estimating the 2D-3D feature point pair to obtain a position relation of the second camera relative to the second positioning label; the positional relationship includes a rotation vector and a translation vector;
step 5057, calculating the position relationship between the second target point and the robot based on the position relationship between the second positioning tag and the second target point and the position relationship between the second camera and the robot, in combination with the position relationship between the second camera and the second positioning tag;
step 5058, calculating the position information of the second target point under the world coordinate system according to the position relation between the second target point and the robot based on the position information of the robot under the world coordinate system;
the steps 5054-5058 may be performed with reference to the steps 5031-5035.
Step 506, performing secondary pose adjustment on the robot according to the second position information to complete the docking of the robot with the second target point;
alternatively, as shown in fig. 5C, fig. 5C is a schematic view of the pose of the quadruped robot dog 2 finally lying on the seat 13. For the docking of the robot with the second target point, specific steps in step 506 may refer to steps 5041-5047 to implement secondary pose adjustment.
As shown in fig. 6A, 6B and 6C, which show in sequence the process of the quadruped robot dog returning for charging after receiving a remote return instruction, the charging pile 1 is arranged inside a house 3, with the first positioning tag 12 on the pile body 11 and the second positioning tag 14 on the seat body 13. From some viewing angles the house 3 blocks the first positioning tag 12 and the second positioning tag 14, so the quadruped robot dog 2 cannot directly observe them inside the house 3 from those angles. To ensure that the robot can still reliably locate and return to the second target point, auxiliary positioning tags are arranged outside the house. Since the house is usually installed with its back against a wall, in this embodiment the auxiliary positioning tags are a third positioning tag 31 and a fourth positioning tag (not shown; it is arranged on the side opposite the third positioning tag), placed on the two sides of the house. Of course, in other alternative embodiments three or more groups of positioning tags may be arranged, as long as the robot can accurately find the first positioning tag by identifying an auxiliary positioning tag; the positions of the auxiliary positioning tags can be chosen according to the actual use requirements, and a single auxiliary positioning tag may also suffice: for example, if two faces of the house are against walls, positioning of the robot can be achieved by arranging only the third positioning tag. In this case, the robot control method includes the following steps:
Step 601, collecting an environment image when a target robot is in a state of returning to a first target point;
step 602, in response to the first positioning tag not being identified from the environment image and the auxiliary positioning tag being identified from the environment image, calculating third position information of a third target point; the auxiliary positioning tag is arranged corresponding to the third target point, and the robot can acquire an environment image of the first positioning tag when the robot is at the third target point;
step 6021, filtering the environment image acquired in the state of returning to the first target point to obtain a third image;
step 6022, performing contour detection on the third image to obtain third contour feature information of the positioning label to be identified;
step 6023, matching the third profile feature information with a predefined auxiliary positioning tag template;
the identification manners of the auxiliary positioning tags in step 6021, step 6022 and step 6023 may refer to the identification manners of the first positioning tag in steps 5021-5023.
Step 6024, extracting at least three third feature points based on the third profile information;
step 6025, matching the third feature point with a corresponding point of the three-dimensional space to obtain a 2D-3D feature point pair;
Step 6026, estimating the 2D-3D characteristic point pairs to obtain the position relation of the first camera relative to the auxiliary positioning tag; the positional relationship includes a rotation vector and a translation vector;
step 6027, calculating the position relationship between the third target point and the robot by combining the position relationship between the first camera and the auxiliary positioning tag based on the position relationship between the auxiliary positioning tag and the third target point and the position relationship between the first camera and the robot;
step 6028, calculating the position information of the third target point under the world coordinate system according to the position relation between the third target point and the robot based on the position information of the robot under the world coordinate system.
The steps 6024-6028 are performed with reference to the steps 5031-5035.
And 603, performing front pose adjustment on the robot according to the third position information until the robot is adjusted to a third target point and acquiring an environment image.
The front pose adjustment is performed on the robot according to the third position information until the robot is adjusted to the third target point. In this embodiment the third target point directly faces a certain position of the door of the house body; of course, in other optional embodiments the third target point may also not face the door, as long as the robot can fully identify the first positioning tag when it is at the third target point. The front pose adjustment of the robot comprises controlling the robot to move to the third target point by adopting a path planning algorithm based on the third target point and the position information of the robot; this navigation may be carried out with reference to steps 5041-5047 above.
After the robot is at the third target point, the robot continues to perform the following steps:
step 6031, filtering the environment image acquired in the state of returning to the first target point to obtain a first image;
step 6032, performing contour detection on the first image to obtain first contour feature information of the positioning label to be identified;
step 6033, matching the first profile feature information with a predefined first positioning label template;
the above-mentioned step 6031-step 6033 may refer to the identification method of the first positioning label from step 5021-step 5023.
In response to identifying the first location tag from the ambient image, first location information of the first target point is calculated, step 604.
Step 605, performing primary pose adjustment on the robot according to the first position information;
step 606, in response to the robot reaching the first target point, and identifying the second positioning tag from the environmental image acquired after reaching the position of the first target point, calculating second position information of the second target point;
Step 607, performing secondary pose adjustment on the robot according to the second position information, so as to complete the docking of the robot with the second target point;
steps 604-607 may be implemented with reference to steps 503-506, and are not described in detail herein.
In summary, the first target point, the second target point and the third target point are predefined, and the first positioning tag, the second positioning tag and the auxiliary positioning tag are arranged on different planes, the first target point corresponding to the first positioning tag, the second target point to the second positioning tag, and the third target point to the auxiliary positioning tag. When the robot is in the state of returning to the first target point and the first positioning tag is not found in the surrounding environment but the auxiliary positioning tag is, the robot performs the front pose adjustment and moves to the third target point; once it reaches the third target point and identifies the position of the first positioning tag from the collected environment images, the robot is driven to find the first target point and reaches it through the primary pose adjustment. After the robot is at the first target point, it identifies the second positioning tag from an image, locates the second target point, performs the secondary pose adjustment to reach it, and finally completes the docking with the second target point. By identifying the positioning tags one by one, the pose of the robot is adjusted step by step until it accurately returns to the second target point and completes the docking, which improves the reliability with which the robot arrives in place. The positioning tags and target points are flexible and convenient to set, can adapt to different working requirements, and give the scheme high robustness. Moreover, for situations in which the robot cannot detect the first positioning tag, the auxiliary positioning tag and the third target point let the robot first move, via the auxiliary positioning tag, to a third target point from which the first positioning tag can be detected, which solves the above problem and further improves adaptability to the environment.
As shown in fig. 7, there is shown a block diagram of a robot control device according to an exemplary embodiment; the robot control device includes:
the image acquisition module acquires an environment image when the target robot is in a state of returning to the first target point;
a first target point confirmation module that calculates first position information of the first target point in response to the first positioning tag being identified from the environment image; the first target point is arranged corresponding to the first positioning label;
the primary pose adjustment module is used for performing primary pose adjustment on the robot according to the first position information;
a second target point confirmation module, which responds to the robot reaching the first target point, and identifies the second positioning label from the environment image acquired after the robot reaches the position of the first target point, and calculates second position information of the second target point; the first positioning tag and the second positioning tag are positioned on different planes; the second target point is arranged corresponding to the second positioning label;
and the secondary pose adjustment module is used for performing secondary pose adjustment on the robot according to the second position information so as to complete the docking of the robot with the second target point.
In one possible implementation, before the identifying the first positioning tag from the environment image, the method further includes:
calculating third position information of a third target point in response to the first positioning tag not being identified from the environment image and the auxiliary positioning tag being identified from the environment image; the auxiliary positioning tag is arranged corresponding to the third target point, the robot can acquire an environment image of the first positioning tag when the robot is at the third target point, and any two of the first positioning tag, the second positioning tag and the auxiliary positioning tag are positioned on different planes;
and performing front pose adjustment on the robot according to the third position information until the robot is adjusted to a third target point and acquiring an environment image.
In one possible implementation manner, the calculating, in response to identifying the first location tag from the environment image, the first location information of the first target point further includes:
filtering an environment image acquired in a state of returning to a first target point to obtain a first image;
performing contour detection on the first image to obtain first contour feature information of the positioning label to be identified;
Matching the first profile feature information with a predefined first positioning tag template;
and/or,
the identifying the second positioning tag from the environmental image acquired after reaching the position of the first target point in response to the robot reaching the first target point comprises:
filtering the environment image acquired in the state of reaching the first target point to obtain a second image;
performing contour detection on the second image to obtain second contour feature information of the positioning label to be identified;
matching the second profile feature information with a predefined second positioning tag template;
and/or,
the calculating third position information of the third target point in response to the first positioning tag not being identified from the environment image and the auxiliary positioning tag being identified from the environment image further comprises:
filtering the environment image acquired in the state of returning to the first target point to obtain a third image;
performing contour detection on the third image to obtain third contour feature information of the positioning label to be identified;
and matching the third profile characteristic information with a predefined auxiliary positioning label template.
In one possible implementation, the robot has a first camera for acquiring an environmental image, and the calculating the first position information of the first target point includes:
extracting at least three first feature points based on the first contour information;
matching the first characteristic points with corresponding points of a three-dimensional space to obtain 2D-3D characteristic point pairs;
estimating the 2D-3D characteristic point pair to obtain the position relation of the first camera relative to the first positioning label; the positional relationship includes a rotation vector and a translation vector;
calculating the position relationship between the first target point and the robot by combining the position relationship between the first camera and the first positioning label based on the position relationship between the first positioning label and the first target point and the position relationship between the first camera and the robot;
calculating the position information of the first target point under the world coordinate system according to the position relation between the first target point and the robot based on the position information of the robot under the world coordinate system;
and/or,
the robot has a second camera for acquiring an image of the environment, the calculating second position information of the second target point comprising:
Extracting at least three second feature points based on the second contour information;
matching the second characteristic points with corresponding points of a three-dimensional space to obtain 2D-3D characteristic point pairs;
estimating the 2D-3D characteristic point pair to obtain the position relation of the second camera relative to the second positioning label; the positional relationship includes a rotation vector and a translation vector;
calculating the position relationship between the second target point and the robot by combining the position relationship between the second camera and the second positioning label based on the position relationship between the second positioning label and the second target point and the position relationship between the second camera and the robot;
calculating the position information of the second target point under the world coordinate system according to the position relation between the second target point and the robot based on the position information of the robot under the world coordinate system;
and/or,
the robot has a first camera for acquiring an image of an environment, the calculating third position information of a third target point comprises:
extracting at least three third feature points based on the third profile information;
matching the third characteristic points with corresponding points of a three-dimensional space to obtain 2D-3D characteristic point pairs;
Estimating the 2D-3D characteristic point pair to obtain the position relation of the first camera relative to the auxiliary positioning tag; the positional relationship includes a rotation vector and a translation vector;
calculating the position relationship between the third target point and the robot by combining the position relationship between the first camera and the auxiliary positioning tag based on the position relationship between the auxiliary positioning tag and the third target point and the position relationship between the first camera and the robot;
and calculating the position information of the third target point under the world coordinate system according to the position relation between the third target point and the robot based on the position information of the robot under the world coordinate system.
In one possible implementation, the front pose adjustment includes controlling the robot to navigate to the third target point using a path planning algorithm, based on the position information of the third target point and of the robot;
the primary pose adjustment includes controlling the robot to navigate to the first target point using a path planning algorithm, based on the position information of the first target point and of the robot;
the secondary pose adjustment includes controlling the robot to navigate to the second target point using a path planning algorithm, based on the position information of the second target point and of the robot.
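The disclosure does not name a specific path planning algorithm; as a non-limiting sketch, the Python function below runs an A* search over a 2D occupancy grid, where the grid, the start cell, and the goal cell are hypothetical inputs that would come from the robot's map and from the computed target point.

import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = blocked); returns a list of cells or None."""
    def h(a, b):                       # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # reconstruct the path goal -> start, then reverse
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc), goal), ng, (nr, nc), cell))
    return None

The same routine could serve the front, primary, and secondary pose adjustments by swapping in the cell corresponding to the third, first, or second target point as the goal.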
In one possible implementation, the controlling the robot to navigate further includes:
collecting environment perception information while the robot is moving and generating road condition information from it;
generating a speed and pose control instruction from the road condition information using a motion control algorithm;
and controlling the speed and the pose of the robot during movement based on the speed and pose control instruction.
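As a hedged illustration of this motion control step, the sketch below generates linear and angular velocity commands with a simple proportional law and slows down when an obstacle is close; the gains, the limits, and the use of a single obstacle distance as a stand-in for the road condition information are assumptions made only for the example.

import math

def velocity_command(robot_pose, waypoint, obstacle_distance,
                     v_max=0.5, w_max=1.0, k_v=0.8, k_w=2.0, safe_dist=0.6):
    """Proportional controller: steer toward the next waypoint and cap speed near obstacles.
    robot_pose = (x, y, heading); waypoint = (x, y). Gains/limits are illustrative."""
    x, y, theta = robot_pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))  # wrap to [-pi, pi]

    # "Road condition" stand-in: scale linear speed down when an obstacle is close ahead.
    clearance = min(1.0, max(0.0, obstacle_distance / safe_dist))
    v = min(v_max, k_v * distance) * clearance
    w = max(-w_max, min(w_max, k_w * heading_error))
    return v, w   # linear and angular velocity command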
In one possible implementation, the robot enters the state of returning to the first target point when it self-triggers an internal return instruction or receives an external return instruction.
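A minimal sketch of this trigger, assuming a hypothetical low-battery check as the internal return instruction and a boolean flag as the external one (neither condition is specified by the disclosure):

from enum import Enum, auto

class RobotState(Enum):
    IDLE = auto()
    WORKING = auto()
    RETURNING_TO_FIRST_TARGET = auto()

def update_state(state, battery_level, external_return_requested,
                 low_battery_threshold=0.2):
    """Enter the return state on an internal trigger (illustrative low-battery check)
    or on an external return instruction."""
    internal_return = battery_level < low_battery_threshold
    if state != RobotState.RETURNING_TO_FIRST_TARGET and (internal_return or external_return_requested):
        return RobotState.RETURNING_TO_FIRST_TARGET
    return state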
According to an exemplary embodiment, the present invention provides a computer device comprising a processor and a memory in which at least one instruction is stored, the at least one instruction being loaded and executed by the processor to implement the robot control method described above.
According to an exemplary embodiment, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the robot control method described above.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.

Claims (10)

1. A robot control method, the control method comprising:
collecting an environment image when the target robot is in a state of returning to the first target point;
calculating first position information of the first target point in response to identifying a first positioning tag from the environment image; the first target point is arranged corresponding to the first positioning tag;
performing primary pose adjustment on the robot according to the first position information;
in response to the robot reaching the first target point and identifying a second positioning tag from an environment image acquired after reaching the position of the first target point, calculating second position information of a second target point; the first positioning tag and the second positioning tag are positioned on different planes; the second target point is arranged corresponding to the second positioning tag;
and performing secondary pose adjustment on the robot according to the second position information to complete the docking of the robot with the second target point.
2. The robot control method of claim 1, wherein the responding to identifying a first positioning tag from the environment image further comprises:
calculating third position information of a third target point in response to the first positioning tag not being identified from the environment image and an auxiliary positioning tag being identified from the environment image; the auxiliary positioning tag is arranged corresponding to the third target point, the robot can acquire an environment image containing the first positioning tag when the robot is at the third target point, and any two of the first positioning tag, the second positioning tag and the auxiliary positioning tag are positioned on different planes;
and performing front pose adjustment on the robot according to the third position information until the robot is adjusted to the third target point and acquires an environment image.
3. The robot control method of claim 2, wherein the calculating first position information of the first target point in response to identifying a first positioning tag from the environment image further comprises:
filtering the environment image acquired in the state of returning to the first target point to obtain a first image;
performing contour detection on the first image to obtain first contour feature information of the positioning tag to be identified;
matching the first contour feature information with a predefined first positioning tag template;
and/or,
the identifying a second positioning tag from an environment image acquired after reaching the position of the first target point in response to the robot reaching the first target point comprises:
filtering the environment image acquired in the state of reaching the first target point to obtain a second image;
performing contour detection on the second image to obtain second contour feature information of the positioning tag to be identified;
matching the second contour feature information with a predefined second positioning tag template;
and/or,
the calculating third position information of the third target point in response to the first positioning tag not being identified from the environment image and the auxiliary positioning tag being identified from the environment image further comprises:
filtering the environment image acquired in the state of returning to the first target point to obtain a third image;
performing contour detection on the third image to obtain third contour feature information of the positioning tag to be identified;
and matching the third contour feature information with a predefined auxiliary positioning tag template.
4. The robot control method of claim 3, wherein
the robot has a first camera for acquiring an environment image, and the calculating first position information of the first target point comprises:
extracting at least three first feature points based on the first contour feature information;
matching the first feature points with corresponding points in three-dimensional space to obtain 2D-3D feature point pairs;
performing pose estimation on the 2D-3D feature point pairs to obtain the positional relationship of the first camera relative to the first positioning tag; the positional relationship includes a rotation vector and a translation vector;
calculating the positional relationship between the first target point and the robot from the positional relationship between the first positioning tag and the first target point, the positional relationship between the first camera and the robot, and the positional relationship between the first camera and the first positioning tag;
calculating the position information of the first target point in the world coordinate system from the positional relationship between the first target point and the robot and the position information of the robot in the world coordinate system;
and/or,
the robot has a second camera for acquiring an environment image, and the calculating second position information of the second target point comprises:
extracting at least three second feature points based on the second contour feature information;
matching the second feature points with corresponding points in three-dimensional space to obtain 2D-3D feature point pairs;
performing pose estimation on the 2D-3D feature point pairs to obtain the positional relationship of the second camera relative to the second positioning tag; the positional relationship includes a rotation vector and a translation vector;
calculating the positional relationship between the second target point and the robot from the positional relationship between the second positioning tag and the second target point, the positional relationship between the second camera and the robot, and the positional relationship between the second camera and the second positioning tag;
calculating the position information of the second target point in the world coordinate system from the positional relationship between the second target point and the robot and the position information of the robot in the world coordinate system;
and/or,
the robot has a first camera for acquiring an environment image, and the calculating third position information of the third target point comprises:
extracting at least three third feature points based on the third contour feature information;
matching the third feature points with corresponding points in three-dimensional space to obtain 2D-3D feature point pairs;
performing pose estimation on the 2D-3D feature point pairs to obtain the positional relationship of the first camera relative to the auxiliary positioning tag; the positional relationship includes a rotation vector and a translation vector;
calculating the positional relationship between the third target point and the robot from the positional relationship between the auxiliary positioning tag and the third target point, the positional relationship between the first camera and the robot, and the positional relationship between the first camera and the auxiliary positioning tag;
and calculating the position information of the third target point in the world coordinate system from the positional relationship between the third target point and the robot and the position information of the robot in the world coordinate system.
5. The robot control method of claim 2, wherein
the front pose adjustment includes controlling the robot to navigate to the third target point using a path planning algorithm, based on the position information of the third target point and of the robot;
the primary pose adjustment includes controlling the robot to navigate to the first target point using a path planning algorithm, based on the position information of the first target point and of the robot;
the secondary pose adjustment includes controlling the robot to navigate to the second target point using a path planning algorithm, based on the position information of the second target point and of the robot.
6. The robot control method of claim 5, wherein the controlling the robot to navigate further comprises:
collecting environment perception information while the robot is moving and generating road condition information from it;
generating a speed and pose control instruction from the road condition information using a motion control algorithm;
and controlling the speed and the pose of the robot during movement based on the speed and pose control instruction.
7. The robot control method of claim 1, wherein the robot enters the state of returning to the first target point when it self-triggers an internal return instruction or receives an external return instruction.
8. A robot control device, comprising:
an image acquisition module that acquires an environment image when the target robot is in a state of returning to the first target point;
a first target point confirmation module that calculates first position information of the first target point in response to a first positioning tag being identified from the environment image; the first target point is arranged corresponding to the first positioning tag;
a primary pose adjustment module that performs primary pose adjustment on the robot according to the first position information;
a second target point confirmation module that, in response to the robot reaching the first target point and a second positioning tag being identified from the environment image acquired after the robot reaches the position of the first target point, calculates second position information of the second target point; the first positioning tag and the second positioning tag are positioned on different planes; the second target point is arranged corresponding to the second positioning tag;
and a secondary pose adjustment module that performs secondary pose adjustment on the robot according to the second position information to complete the docking of the robot with the second target point.
9. A computer device comprising a processor and a memory having stored therein at least one instruction that is loaded and executed by the processor to implement the robot control method of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the robot control method according to any one of claims 1-7.
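By way of illustration of the tag detection pipeline recited in claim 3 (filtering, contour detection, and template matching), the following Python sketch uses OpenCV; the Gaussian blur, Otsu thresholding, matchShapes score threshold, and the predefined template_contour are example choices introduced here, not mandated by the claims.

import cv2
import numpy as np

def detect_tag_candidates(image, template_contour, match_threshold=0.15):
    """Filter the image, detect contours, and match them against a predefined
    tag contour template; returns the contours that resemble the template."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                 # filtering step
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)     # contour detection step
    candidates = []
    for contour in contours:
        score = cv2.matchShapes(contour, template_contour,
                                cv2.CONTOURS_MATCH_I1, 0.0)     # template matching step
        if score < match_threshold:
            candidates.append(contour)
    return candidates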
CN202311065901.4A 2023-08-22 2023-08-22 Robot control method, device, equipment and medium Pending CN117130381A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311065901.4A CN117130381A (en) 2023-08-22 2023-08-22 Robot control method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311065901.4A CN117130381A (en) 2023-08-22 2023-08-22 Robot control method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN117130381A true CN117130381A (en) 2023-11-28

Family

ID=88850164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311065901.4A Pending CN117130381A (en) 2023-08-22 2023-08-22 Robot control method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117130381A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination