CN117519164A - Navigation method of movable robot and movable robot - Google Patents

Navigation method of movable robot and movable robot

Info

Publication number: CN117519164A
Application number: CN202311528329.0A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Pending
Prior art keywords: point cloud, reference object, current, tracked, candidate
Inventors: 林辉, 卢维, 李翔, 尹春辉
Current assignee: Zhejiang Huaray Technology Co Ltd
Original assignee: Zhejiang Huaray Technology Co Ltd
Classification (Landscapes): Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies

Abstract

The application discloses a navigation method of a movable robot and the movable robot. The navigation method comprises: performing laser scanning on an environment containing a reference object to obtain current point cloud data, and acquiring a reference object tracking record; performing point cloud extraction on the current point cloud data according to reflection intensity information of the current point cloud data to obtain an initial reference point cloud and a candidate reference point cloud; verifying whether the candidate reference point cloud is a point cloud reflected by the reference object by using the position of the reference object in the reference object tracking record and the position of the candidate reference point cloud, and taking the verified candidate reference point cloud as a supplementary reference point cloud; and calculating current pose information of the reference object by combining the initial reference point cloud and the supplementary reference point cloud, and navigating the movable robot according to the current pose information of the reference object. This avoids inaccurate extraction of the reference point cloud and improves the accuracy of navigating the movable robot according to the current pose information of the reference object.

Description

Navigation method of movable robot and movable robot
Technical Field
The present disclosure relates to the field of autonomous navigation, and in particular, to a method for navigating a mobile robot and a mobile robot.
Background
A movable robot is a robot having autonomous motion and environmental sensing capabilities, such as a logistics robot, a patrol robot, or a sweeping robot.
In a robot system, navigation is one of the core technologies: it determines the current pose information of the robot through sensor technology and determines the robot's travel route according to destination information. Current robot navigation technologies fall into two categories: one relies only on a global map and global positioning to direct the movable robot to the target location; the other presets a reference object and dynamically plans a navigation path from the position and attitude of the reference object relative to the movable robot. Navigation by means of a reference object offers higher stability, accuracy and flexibility.
However, when navigating according to the reference object, the angle at which the movable robot scans the reference object may be unfavorable, which degrades the scanning result corresponding to the reference object and in turn the navigation performance of the movable robot.
Disclosure of Invention
In order to solve the above problems, the present application provides at least a navigation method of a mobile robot and a mobile robot.
The first aspect of the application provides a navigation method of a movable robot, which comprises the following steps: performing laser scanning on an environment containing a reference object to obtain current point cloud data, and acquiring a tracking record of the reference object; the reference object tracking record is used for recording the position of a reference object obtained according to the historical point cloud data, and the acquisition time of the historical point cloud data is earlier than the acquisition time of the current point cloud data; performing point cloud extraction on the current point cloud data according to the reflection intensity information of the current point cloud data to obtain an initial reference point cloud and a candidate reference point cloud; verifying whether the candidate reference point cloud is a point cloud reflected by the reference object or not by utilizing the position of the reference object and the position of the candidate reference point cloud in the reference object tracking record, and taking the verified candidate reference point cloud as a supplementary reference point cloud; and calculating the current pose information of the reference object by combining the initial reference object point cloud and the supplementary reference object point cloud, and navigating the movable robot according to the current pose information of the reference object.
In one embodiment, verifying whether the candidate reference point cloud is a point cloud reflected by the reference using the position of the reference in the reference tracking record and the position of the candidate reference point cloud comprises: calculating the distance between the position of the candidate reference object point cloud and the position of the reference object in the reference object tracking record; and if the distance is within the first distance threshold range, judging that the candidate reference object point cloud is the point cloud reflected by the reference object.
In an embodiment, before calculating the distance between the position of the candidate reference point cloud and the position of the reference in the reference tracking record, the method further comprises: acquiring a coordinate system corresponding to the position of a reference object in the reference object tracking record, and obtaining a reference coordinate system; and projecting the candidate reference object point cloud to a reference coordinate system to obtain the position of the candidate reference object point cloud.
In an embodiment, performing point cloud extraction on current point cloud data according to reflection intensity information of the current point cloud data to obtain an initial reference point cloud and a candidate reference point cloud, including: acquiring a first intensity threshold range and a second intensity threshold range; extracting points with the reflection intensity in a first intensity threshold range from the current point cloud data to obtain an initial reference object point cloud; and extracting points with the reflection intensity in the second intensity threshold range from the current point cloud data to obtain candidate reference point clouds.
In an embodiment, calculating current pose information of a reference object in combination with an initial reference object point cloud and a supplemental reference object point cloud includes: fusing the initial reference object point cloud and the supplementary reference object point cloud to obtain a reference object point cloud; extracting reference line characteristics of the reference object target point cloud to obtain reference line characteristics corresponding to the reference object target point cloud; calculating current position information and current posture information of a reference object by utilizing the reference line characteristics; and obtaining the current pose information of the reference object according to the current position information and the current pose information of the reference object.
In an embodiment, navigating the movable robot according to current pose information of the reference object comprises: acquiring an initial navigation path; correcting the initial navigation path by using the current pose information of the reference object to obtain a corrected navigation path; and navigating the movable robot according to the corrected navigation path.
In an embodiment, the method further comprises: and updating the reference object tracking record according to the current pose information of the reference object.
In an embodiment, the reference object is composed of a plurality of reflective modules arranged at intervals, the current pose information contains the positions of the identified reflective modules, and the reference object tracking record contains the positions of the tracked reflective modules, the positions of the reflective modules to be tracked and the observation times records corresponding to the reflective modules to be tracked; updating a reference tracking record according to current pose information of a reference, comprising: calculating the distance between the positions of the identified reflecting modules and the positions of the tracked reflecting modules in the reference object tracking record respectively; if the distance between the identified light reflecting module and the tracked light reflecting module is not in the second distance threshold range and the distance between the identified light reflecting module and the light reflecting module to be tracked is not in the third distance threshold range, adding the identified light reflecting module as the light reflecting module to be tracked into a reference object tracking record, and initializing an observation frequency record corresponding to the identified light reflecting module; if the distance between the identified light reflecting module and the tracked light reflecting module is not in the second distance threshold range and the distance between the identified light reflecting module and the light reflecting module to be tracked is in the third distance threshold range, updating the observation times record corresponding to the light reflecting module to be tracked; and converting the reflection module to be tracked, of which the observation frequency record is larger than a preset frequency threshold value, into a tracked reflection module.
In one embodiment, verifying whether the candidate reference point cloud is a point cloud reflected by the reference using the position of the reference in the reference tracking record and the position of the candidate reference point cloud comprises: calculating the distance between the position of each tracked reflecting module in the reference object tracking record and the position of the candidate reference object point cloud; taking the tracked reflecting module with the smallest distance between the tracked reflecting module and the candidate reference object point cloud as a target tracking reflecting module; and if the distance between the candidate reference object point cloud and the target tracking reflection module is within the first distance threshold range, judging the candidate reference object point cloud as the point cloud reflected by the target tracking reflection module.
A second aspect of the present application provides a navigation device for a mobile robot, the device comprising: the data acquisition module is used for carrying out laser scanning on the environment containing the reference object to obtain current point cloud data and acquiring a tracking record of the reference object; the reference object tracking record is used for recording the position of a reference object obtained according to the historical point cloud data, and the acquisition time of the historical point cloud data is earlier than the acquisition time of the current point cloud data; the point cloud extraction module is used for extracting the point cloud of the current point cloud data according to the reflection intensity information of the current point cloud data to obtain an initial reference point cloud and a candidate reference point cloud; the verification module is used for verifying whether the candidate reference object point cloud is the point cloud reflected by the reference object or not by utilizing the position of the reference object and the position of the candidate reference object point cloud in the reference object tracking record, and taking the verified candidate reference object point cloud as a supplementary reference object point cloud; and the navigation module is used for calculating the current pose information of the reference object by combining the initial reference object point cloud and the supplementary reference object point cloud, and navigating the movable robot according to the current pose information of the reference object.
The third aspect of the application provides a movable robot, comprising: a laser scanning device, configured to perform laser scanning on an environment containing a reference object to obtain current point cloud data; and a controller, configured to implement the above navigation method of the movable robot.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the above-described navigation method of a mobile robot.
According to the above scheme, laser scanning is performed on the environment containing the reference object to obtain current point cloud data, and a reference object tracking record is acquired, the reference object tracking record recording the position of the reference object obtained from historical point cloud data whose acquisition time is earlier than that of the current point cloud data; point cloud extraction is performed on the current point cloud data according to its reflection intensity information to obtain an initial reference point cloud and a candidate reference point cloud; whether the candidate reference point cloud is a point cloud reflected by the reference object is verified using the position of the reference object in the reference object tracking record and the position of the candidate reference point cloud, and the verified candidate reference point cloud is taken as a supplementary reference point cloud; and the current pose information of the reference object is calculated by combining the initial reference point cloud and the supplementary reference point cloud, and the movable robot is navigated according to the current pose information of the reference object. This avoids inaccurate extraction of the reference point cloud caused by an unfavorable scanning angle or distance during laser scanning, yields more complete reference point cloud data and hence more accurate current pose information of the reference object, and improves the accuracy of navigating the movable robot according to the current pose information of the reference object.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
FIG. 1 is a schematic diagram of one implementation environment involved in a method of navigating a mobile robot shown in an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a change in position between a movable robot and a reference in accordance with an exemplary embodiment of the present application;
FIG. 3 is a flow chart illustrating a method of navigation of a mobile robot according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of extracting an initial reference point cloud and a candidate reference point cloud, as shown in an exemplary embodiment of the present application;
FIG. 5 is a schematic illustration of a reference shown in an exemplary embodiment of the present application;
FIG. 6 is a flow chart illustrating reference tracking record updating according to an exemplary embodiment of the present application;
FIG. 7 is a schematic diagram illustrating reference trace record updates performed in accordance with an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of path navigation shown in an exemplary embodiment of the present application;
FIG. 9 is a block diagram of a navigation device of a mobile robot shown in an exemplary embodiment of the present application;
FIG. 10 is a schematic view of a movable robot shown in an exemplary embodiment of the present application;
FIG. 11 is a schematic structural view of a computer-readable storage medium shown in an exemplary embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" is herein merely an association information describing an associated object, meaning that three relationships may exist, e.g., a and/or B may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship. Further, "a plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
The following describes a navigation method of a mobile robot provided in an embodiment of the present application.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment related to a navigation method of a mobile robot in the present application. As shown in fig. 1, the implementation environment includes a reference 110 and a movable robot 120.
The reference 110 may be made of a highly reflective material. For example, the reference object 110 is a reflector such as a reflective plate or reflective tape.
The mobile robot 120 is deployed with a laser scanning device for detecting the reference 110. For example, the laser scanning device emits laser light into the environment containing the reference object, receives the reflected light, obtains point cloud data from the received reflected light, and can recognize the position of the reference object from the point cloud data, thereby realizing path navigation.
However, when the movable robot 120 moves, the positional relationship between the movable robot 120 and the reference object 110 changes. The applicant found that, due to the characteristics of laser light, when the incident angle of the laser is too small or the laser is too close to the reference object 110, the reflection intensity of the point cloud returned by the reference object 110 drops sharply, so that the point cloud data corresponding to the reference object 110 cannot be accurately segmented, which in turn affects the navigation of the movable robot.
Take as an example the navigation process of a docking service executed by an intelligent transfer robot. A docking service is a process in which the intelligent transfer robot carries a load such as a shelf or cargo, observes the environment through sensors such as a laser radar, and plans a path so that, after navigating along the planned path, the load is placed at a preset distance and orientation relative to a target work table. Common docking scenarios of the intelligent transfer robot include a production-line trolley docking with a conveyor belt, a forklift pallet docking with a hoist, and the like; in such cases, a reference object can be installed at the library site corresponding to the target work table to improve the navigation accuracy of the intelligent transfer robot. In some scenarios, the intelligent transfer robot docks laterally when approaching the target work table. Because the laser scanning device is usually installed at the center or on a diagonal of the intelligent transfer robot, a small incident angle forms between the robot and the library site corresponding to the target work table during lateral docking, so that the scanned point cloud data cannot accurately segment the reference object. As shown in fig. 2, as the intelligent transfer robot approaches the library site corresponding to the target work table, the distance and the incident angle between the laser beam emitted by the robot and the reference object decrease sharply.
Based on the above, the present application provides a navigation method of a movable robot and a movable robot to solve the above-mentioned problems.
Referring to fig. 3, fig. 3 is a flowchart illustrating a navigation method of a mobile robot according to an exemplary embodiment of the present application. The navigation method of the movable robot may be applied to the implementation environment shown in fig. 1 and specifically performed by the movable robot in the implementation environment. It should be understood that the method may be adapted for other exemplary implementation environments and be specifically performed by devices in other implementation environments, for example, by a server communicatively coupled to a mobile robot, and that the embodiment is not limited to the implementation environments in which the method may be adapted.
Next, a navigation method of the mobile robot of the present application will be described with the mobile robot as the execution subject.
As shown in fig. 3, the navigation method of the mobile robot at least includes steps S310 to S340, and is described in detail as follows:
step S310: performing laser scanning on an environment containing a reference object to obtain current point cloud data, and acquiring a tracking record of the reference object; the reference object tracking record is used for recording the position of a reference object obtained according to the historical point cloud data, and the acquisition time of the historical point cloud data is earlier than the acquisition time of the current point cloud data.
And scanning the environment containing the reference object through a laser scanning device to obtain current point cloud data, wherein the current point cloud data contains point cloud data corresponding to the reference object.
The laser scanning device includes, but is not limited to, a helium-neon laser sensor, a semiconductor laser sensor, a solid laser sensor, a fiber laser sensor, and the like, and the type of the laser scanning device can be flexibly selected according to the actual application scene and the type of the movable robot, which is not limited in the application.
Further, a reference tracking record is obtained, the reference tracking record being used for recording the position of the reference obtained according to the historical point cloud data analysis.
The movable robot may store the reference object tracking record locally and query it directly; alternatively, another device stores the reference object tracking record, and the movable robot sends a query instruction to that device to obtain the reference object tracking record fed back by it.
For example, the position of the reference object recorded in the reference object tracking record may be coordinates in a reference coordinate system, such as coordinates of the reference object in a map coordinate system; the position of the reference object recorded in the reference object tracking record may also be a positional relationship between the reference object and the movable robot, such as a distance, a direction, or the like of the reference object with respect to the movable robot.
Step S320: and carrying out point cloud extraction on the current point cloud data according to the reflection intensity information of the current point cloud data to obtain an initial reference point cloud and a candidate reference point cloud.
It will be appreciated that, since the reference object is set up differently from other objects in the environment, the reflection intensity of the reference object generally differs from that of other objects in the environment. Point cloud extraction can therefore be performed on the current point cloud data according to its reflection intensity information, so as to filter out point clouds reflected by other objects in the environment and retain point clouds possibly reflected by the reference object.
Because the position of the movable robot changes, the reflection intensity of the point cloud reflected by the reference object fluctuates considerably. Therefore, in addition to the initial reference point cloud, extracted from the current point cloud data as the points determined to be reflected by the reference object, a candidate reference point cloud is also extracted from the current point cloud data as the points possibly reflected by the reference object, so that no reference point cloud is missed.
For example, the extraction process of the initial reference point cloud and the candidate reference point cloud may be: and acquiring preset intensity threshold ranges corresponding to the initial reference point cloud and the candidate reference point cloud respectively, extracting point clouds with reflection intensities respectively in the preset intensity threshold ranges from the current point cloud data, and obtaining the initial reference point cloud and the candidate reference point cloud.
The preset intensity threshold range may be input by a user; the method can also be obtained by analyzing the point cloud data which is determined to be the reference object in the historical point cloud data, such as counting the maximum value and the minimum value of the points which are determined to be the reference object in the historical point cloud data, and dividing the threshold range according to the maximum value and the minimum value to obtain the preset intensity threshold range corresponding to the initial reference object point cloud and the candidate reference object point cloud respectively.
For example: a first intensity threshold range and a second intensity threshold range are acquired; points whose reflection intensity falls within the first intensity threshold range are extracted from the current point cloud data to obtain the initial reference point cloud; and points whose reflection intensity falls within the second intensity threshold range are extracted from the current point cloud data to obtain the candidate reference point cloud.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating extraction of an initial reference point cloud and a candidate reference point cloud according to an exemplary embodiment of the present application. As shown in fig. 4, the range greater than the intensity threshold Ths_r is taken as the first intensity threshold range, and the initial reference point cloud is obtained by screening. The range less than the intensity threshold Ths_r and greater than the intensity threshold 2×Ths_avg is taken as the second intensity threshold range, and the candidate reference point cloud is obtained by screening, where Ths_avg is the average reflection intensity of the non-reference point cloud.
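As an illustration of this dual-threshold extraction, the following sketch assumes each scan point is stored as an [x, y, intensity] row; the array layout and function name are assumptions for illustration, not part of the patent:

```python
import numpy as np

def extract_reference_clouds(points: np.ndarray, ths_r: float, ths_avg: float):
    """Split a scan into initial and candidate reference point clouds by intensity.

    points:  (N, 3) array of [x, y, reflection_intensity] rows.
    ths_r:   intensity above which a point is taken as surely reflected by the reference.
    ths_avg: average reflection intensity of the non-reference point cloud.
    """
    intensity = points[:, 2]
    initial_mask = intensity > ths_r                                     # first intensity threshold range
    candidate_mask = (intensity <= ths_r) & (intensity > 2.0 * ths_avg)  # second intensity threshold range
    return points[initial_mask], points[candidate_mask]
```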
For another example, the extraction process of the initial reference point cloud and the candidate reference point cloud may further be: analyzing the distribution of the reflection intensity of each point in the current point cloud data to obtain the point cloud intensity distribution characteristics corresponding to the current point cloud data, for example, counting the reflection intensity of each point in the current point cloud data, and taking the number of points corresponding to each reflection intensity value as the point cloud intensity distribution characteristics corresponding to the current point cloud data. And then, dividing points in the current point cloud data according to the point cloud intensity distribution characteristics to obtain an initial reference point cloud and a candidate reference point cloud.
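A minimal sketch of this distribution-based variant follows; the histogram-valley heuristic for choosing the split is an assumption used only for illustration, since the patent states merely that points are divided according to the intensity distribution features:

```python
import numpy as np

def split_by_intensity_distribution(points: np.ndarray, bins: int = 64):
    """Derive the threshold values from the scan's own intensity histogram,
    then reuse the dual-threshold extraction sketched above."""
    intensity = points[:, 2]
    hist, edges = np.histogram(intensity, bins=bins)
    # Heuristic: take the emptiest bin in the upper half of the histogram as
    # the split between ordinary returns and high-intensity reference returns.
    upper = bins // 2
    split = float(edges[upper + int(np.argmin(hist[upper:]))])
    ths_avg = float(intensity[intensity < split].mean())
    return extract_reference_clouds(points, ths_r=split, ths_avg=ths_avg)
```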
Step S330: and verifying whether the candidate reference point cloud is a point cloud reflected by the reference object by utilizing the position of the reference object and the position of the candidate reference point cloud in the reference object tracking record, and taking the verified candidate reference point cloud as a supplementary reference point cloud.
The candidate reference point cloud may be a point cloud reflected by the reference or a point cloud reflected by the non-reference, and thus it is necessary to further verify whether the candidate reference point cloud is a point cloud reflected by the reference or not according to the reference tracking record.
And verifying whether the candidate reference object point cloud is the point cloud reflected by the reference object according to the position of the reference object in the reference object tracking record and the position of the candidate reference object point cloud.
For example, the position of the reference object recorded in the reference object tracking record may be coordinates in a reference coordinate system, such as the reference object's coordinates in a map coordinate system, and the position of the candidate reference point cloud may be the coordinates obtained by projecting the candidate reference point cloud into the map coordinate system. A neighborhood range corresponding to the reference object is determined from the reference object's coordinates in the map coordinate system in the tracking record; if the position of the candidate reference point cloud falls within the neighborhood range, the candidate reference point cloud is considered to be a point cloud reflected by the reference object; otherwise, it is considered not to be a point cloud reflected by the reference object.
For another example, the position of the reference object recorded in the reference object tracking record may be a positional relationship between the reference object and the movable robot, such as a distance and a direction of the reference object with respect to the movable robot, and the predicted distance range and the predicted direction range may be obtained by predicting the distance and the direction of the reference object with respect to the movable robot at the current time based on the distance and the direction of the reference object with respect to the movable robot at the historical time in the reference object tracking record and based on pose change information of the movable robot between the historical time and the current time. And then, taking the distance and the direction of the candidate reference object point cloud relative to the movable robot as the position of the candidate reference object point cloud, judging whether the distance and the direction of the candidate reference object point cloud relative to the movable robot are in a predicted distance range and a predicted direction range, if so, considering the candidate reference object point cloud as the point cloud reflected by the reference object, and otherwise, considering the candidate reference object point cloud as the point cloud not reflected by the reference object.
If the candidate reference object point cloud is the point cloud reflected by the reference object, the candidate reference object point cloud passes verification, and the candidate reference object point cloud is taken as the supplementary reference object point cloud; if the candidate reference point cloud is not the point cloud reflected by the reference, the candidate reference point cloud is not verified and is not taken as the supplementary reference point cloud.
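The verification step can be sketched as a nearest-neighbour check against the recorded reference positions; representing each candidate cluster by its centroid in the map frame, and using a single threshold value, are illustrative assumptions:

```python
import math

def verify_candidates(candidate_centroids, recorded_positions, first_dist_threshold):
    """Return the candidate clusters accepted as reflected by the reference.

    candidate_centroids: [(x, y)] centroids of candidate clusters, map frame.
    recorded_positions:  [(x, y)] reference positions from the tracking record.
    """
    supplementary = []
    for cx, cy in candidate_centroids:
        dists = [math.hypot(cx - rx, cy - ry) for rx, ry in recorded_positions]
        if dists and min(dists) < first_dist_threshold:  # within first distance threshold range
            supplementary.append((cx, cy))
    return supplementary
```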
Step S340: and calculating the current pose information of the reference object by combining the initial reference object point cloud and the supplementary reference object point cloud, and navigating the movable robot according to the current pose information of the reference object.
Combining the initial reference object point cloud and the supplementary reference object point cloud to obtain more complete reference object point cloud data, further calculating more accurate current pose information of the reference object, and improving the accuracy of navigating the movable robot according to the current pose information of the reference object.
In some embodiments, the method further comprises: and updating the reference object tracking record according to the current pose information of the reference object. The position of the reference object in the reference object tracking record is updated according to the current pose information of the reference object.
For example, the position of the reference object recorded in the reference object tracking record may be a coordinate in the reference coordinate system, and the current coordinate of the reference object in the reference coordinate system may be obtained according to the current pose information of the reference object. The original coordinates of the reference object in the reference object tracking record can be updated to the current coordinates; alternatively, the coordinates of the reference object at a plurality of moments in the preset time period may be recorded, a coordinate set of the reference object is obtained, the position of the reference object is obtained according to the coordinate set of the reference object, for example, a coordinate average value in the coordinate set is calculated, the calculated coordinate average value is used as the position of the reference object, the current coordinates of the reference object obtained according to the current point cloud data are added to the coordinate set of the reference object, and the coordinate average value is updated.
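One way to realize this coordinate-set variant of the record update is a bounded running window whose mean serves as the recorded position; the window length is an assumed parameter, and the class layout is a sketch rather than the patent's data structure:

```python
from collections import deque

class ReferenceTrack:
    """One entry of the reference tracking record: recent coordinates of a
    reference object, with their mean exposed as the recorded position."""

    def __init__(self, window: int = 20):
        self.coords = deque(maxlen=window)  # coordinates within the preset time period

    def add(self, x: float, y: float) -> None:
        self.coords.append((x, y))          # adding a coordinate updates the mean

    def position(self) -> tuple:
        # Assumes at least one coordinate has been added.
        xs = [c[0] for c in self.coords]
        ys = [c[1] for c in self.coords]
        return (sum(xs) / len(xs), sum(ys) / len(ys))
```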
For example, the position of the reference object recorded in the reference object tracking record may be the positional relationship between the reference object and the movable robot, such as the distance and direction of the reference object relative to the movable robot; in that case, the current distance and current direction of the reference object relative to the movable robot, obtained from the current pose information of the reference object, are recorded.
For example, the reference object is composed of a plurality of light reflecting modules arranged at intervals. As shown in fig. 5, the reference object is composed of a rectangular flat plate and an odd number of reflective strips adhered on the flat plate, wherein the width of the reflective strips and the distance between the centers of two adjacent reflective strips can be flexibly set.
Optionally, the center point of the middle reflective strip can be used as the center of the reference object, which effectively avoids a reduction in recognition accuracy caused by inaccurate boundary extraction of the edge reflective strips when the incident angle between the laser and the reference object is small.
Referring to fig. 6, fig. 6 is a flowchart of updating a reference tracking record, as shown in fig. 6, of an exemplary embodiment of the present application, point cloud extraction is performed on current point cloud data to obtain an initial reference point cloud and a candidate reference point cloud, and for the candidate reference point cloud, verification is performed on the candidate reference point cloud according to the reference tracking record to determine a point cloud reflected by a reference in the candidate reference point cloud, and the candidate reference point cloud passing the verification is used as a supplementary reference point cloud. And combining the initial reference object point cloud and the supplementary reference object point cloud to perform reflection module aggregation to obtain a reflection module, performing reference object pose recognition according to the reflection module obtained by aggregation to obtain current pose information of the reference object, updating a reference object tracking record according to the current pose information of the reference object, and navigating the movable robot according to the current pose information of the reference object.
For example, the current pose information contains the positions of the identified reflective modules, which are determined from the initial reference point cloud and the supplementary reference point cloud. For instance, point cloud clustering is performed on the initial reference point cloud and the supplementary reference point cloud to obtain clustering results, each clustering result corresponding to one reflective strip; the coordinates of the point cloud data corresponding to each clustered reflective strip can then be obtained, the average coordinates of each strip's point cloud data computed, and the position corresponding to each reflective strip, that is, the position of each identified reflective module, obtained.
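A sketch of this clustering step follows; splitting on the gap between consecutive points assumes the points arrive in scan order, and the gap and minimum-size values are illustrative:

```python
import numpy as np

def cluster_reflective_strips(points_xy: np.ndarray, gap: float = 0.05, min_pts: int = 3):
    """Split the fused reference points into strips wherever consecutive points
    are farther apart than `gap`, returning one centroid per strip (the
    positions of the identified reflective modules)."""
    if len(points_xy) == 0:
        return []
    steps = np.linalg.norm(np.diff(points_xy, axis=0), axis=1)
    breaks = np.where(steps > gap)[0] + 1
    clusters = np.split(points_xy, breaks)
    return [c.mean(axis=0) for c in clusters if len(c) >= min_pts]
```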
The reference object tracking record contains the position of the tracked reflecting module, the position of the reflecting module to be tracked and the observation times record corresponding to the reflecting module to be tracked. The number of observations of the light reflecting module refers to the number of observations of the light reflecting module in point cloud data acquired at different time points, for example, in fig. 5, the light reflecting module includes a first light reflecting strip, a second light reflecting strip and a third light reflecting strip, where the number of observations corresponding to the first light reflecting strip is 5, the number of observations corresponding to the second light reflecting strip is 3, and the number of observations corresponding to the third light reflecting strip is 1.
The movable robot periodically performs laser scanning on the environment containing the reference object and analyzes the point cloud data obtained by each scan to determine the reflective modules observed in that data. The positions of the observed reflective modules are tracked and recorded, and the number of observations of each reflective module is counted; according to the recorded observation counts, observed reflective modules are divided into tracked reflective modules and reflective modules to be tracked, and a reflective module to be tracked whose observation count exceeds a preset count threshold is converted into a tracked reflective module.
Specifically, referring to fig. 7, updating a reference tracking record according to current pose information of a reference includes: calculating the distance between the positions of the identified reflecting modules and the positions of the tracked reflecting modules in the reference object tracking record respectively; if the distance between the identified light reflecting module and the tracked light reflecting module is not in the second distance threshold range and the distance between the identified light reflecting module and the light reflecting module to be tracked is not in the third distance threshold range, adding the identified light reflecting module as the light reflecting module to be tracked into a reference object tracking record, and initializing an observation frequency record corresponding to the identified light reflecting module; if the distance between the identified light reflecting module and the tracked light reflecting module is not in the second distance threshold range and the distance between the identified light reflecting module and the light reflecting module to be tracked is in the third distance threshold range, updating the observation times record corresponding to the light reflecting module to be tracked; and converting the reflection module to be tracked, of which the observation frequency record is larger than a preset frequency threshold value, into a tracked reflection module.
For example, the position of the light reflecting module recorded in the reference object tracking record is the coordinate of the light reflecting module under the map coordinate system, and the coordinates of all points corresponding to the identified light reflecting module in the current point cloud data are obtained to calculate the average coordinates of the points. And then, the calculated average coordinates are projected into a map coordinate system by combining the pose of the movable robot, the pose of the laser scanning device and the like at the moment of collecting the current point cloud data, so that the position of the identified reflecting module is obtained.
For each identified reflective module, the distance to each tracked reflective module is first calculated. If the distance between the identified reflective module and at least one tracked reflective module is within the second distance threshold range, for example smaller than a preset distance threshold Ths_d, the identified reflective module is considered to already have a tracking record capable of performing the verification service; the update may be skipped, or only the position of the nearest tracked reflective module may be updated according to the position of the identified reflective module.
If the distance between the identified reflective module and every tracked reflective module is outside the second distance threshold range, for example not smaller than the preset distance threshold Ths_d, the distances between the identified reflective module and the reflective modules to be tracked are further calculated. If the distance to every reflective module to be tracked is also outside the third distance threshold range, for example not smaller than the preset distance threshold Ths_d, the identified reflective module has no corresponding tracking record; it is added to the reference object tracking record as a reflective module to be tracked, and its observation count record is initialized, for example to 1. Otherwise, if the distance to at least one reflective module to be tracked is within the third distance threshold range, for example the distance to the nearest reflective module to be tracked is smaller than the preset distance threshold Ths_d, the identified reflective module has a corresponding tracking record, but its observation count is still too low for the data to reliably perform the verification service; in this case, only the observation count record of the nearest reflective module to be tracked is updated, for example incremented by 1. Meanwhile, any reflective module to be tracked whose observation count record exceeds the preset count threshold is converted into a tracked reflective module, so that the tracking data is applied accurately.
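The update logic just described can be summarized as follows; collapsing the second and third distance threshold ranges into a single Ths_d mirrors the example above, and the record layout is an assumption made for the sketch:

```python
import math

def update_tracking_record(identified, record, ths_d: float, count_threshold: int):
    """One pass of the reference-tracking-record update.

    identified: [(x, y)] positions of reflective modules recognized this scan.
    record:     {'tracked': [(x, y)], 'pending': [{'pos': (x, y), 'count': int}]}.
    """
    def near(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) < ths_d

    for pos in identified:
        if any(near(pos, t) for t in record['tracked']):
            continue                                  # already tracked: verification possible
        hit = next((p for p in record['pending'] if near(pos, p['pos'])), None)
        if hit is None:                               # no tracking record at all: start one
            record['pending'].append({'pos': pos, 'count': 1})
        else:                                         # pending record: count one more observation
            hit['count'] += 1
    # Promote pending modules observed more often than the preset threshold.
    record['tracked'] += [p['pos'] for p in record['pending'] if p['count'] > count_threshold]
    record['pending'] = [p for p in record['pending'] if p['count'] <= count_threshold]
    return record
```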
Further, in combination with the above embodiment, verifying whether the candidate reference point cloud is a point cloud reflected by the reference object by using the position of the reference object in the reference object tracking record and the position of the candidate reference point cloud includes: calculating the distance between the position of each tracked reflecting module in the reference object tracking record and the position of the candidate reference object point cloud; taking the tracked reflecting module with the smallest distance between the tracked reflecting module and the candidate reference object point cloud as a target tracking reflecting module; and if the distance between the candidate reference object point cloud and the target tracking reflection module is within the first distance threshold range, judging the candidate reference object point cloud as the point cloud reflected by the target tracking reflection module.
The positions of all tracked reflective modules in the reference object tracking record are traversed to calculate the distances between the position of the candidate reference point cloud and the positions of the tracked reflective modules; the tracked reflective module with the smallest distance to the candidate reference point cloud is determined as the target tracking reflective module. If that minimum distance is within the first distance threshold range, for example smaller than the preset distance threshold Ths_d, the candidate reference point cloud is judged to be the point cloud reflected by the target tracking reflective module and is associated with the target tracking reflective module.
Wherein, the distance may be a Euclidean distance.
When verifying whether the candidate reference object point cloud is the point cloud reflected by the reference object by using the reference object tracking record, the verification is performed only according to the position of the tracked reflection module recorded in the reference object tracking record, so that the verification accuracy is improved.
In some embodiments, calculating current pose information of a reference in combination with an initial reference point cloud and a supplemental reference point cloud includes: fusing the initial reference object point cloud and the supplementary reference object point cloud to obtain a reference object point cloud; extracting reference line characteristics of the reference object target point cloud to obtain reference line characteristics corresponding to the reference object target point cloud; calculating current position information and current posture information of a reference object by utilizing the reference line characteristics; and obtaining the current pose information of the reference object according to the current position information and the current pose information of the reference object.
References of different shapes call for different manners of extracting the reference line features.
Taking the reference object of fig. 5 as an example, the initial reference point cloud and the supplementary reference point cloud are fused to obtain the reference target point cloud, and the reference target point cloud is clustered, each clustering result corresponding to one reflective strip. A straight-line feature is then extracted from the clustered point cloud data of each reflective strip to obtain the line equation $ax + by + c = 0$ corresponding to each strip's point cloud data, where $[a, b, c]$ are the extracted line parameters (normalized here so that $a^2 + b^2 = 1$). Let the clustered point cloud data of a reflective strip contain the points $\{p_m, \ldots, p_n\}$, the coordinates of the $i$-th point ($m \le i \le n$) being $(p_{i\_x}, p_{i\_y})$. Projecting these points onto the line corresponding to the strip's point cloud data yields the projected point cloud $\{p'_m, \ldots, p'_n\}$, where the coordinates $(p'_{i\_x}, p'_{i\_y})$ of the $i$-th projected point can be computed as

$$p'_{i\_x} = p_{i\_x} - a\,(a\,p_{i\_x} + b\,p_{i\_y} + c), \qquad p'_{i\_y} = p_{i\_y} - b\,(a\,p_{i\_x} + b\,p_{i\_y} + c).$$

Then, from the projected point cloud $\{p'_m, \ldots, p'_n\}$, the current position information $(t_x, t_y)$ and the current posture information $\theta$ of the reference object can be computed, for example as the centroid of the projected points and the direction of the fitted line:

$$t_x = \frac{1}{n-m+1}\sum_{i=m}^{n} p'_{i\_x}, \qquad t_y = \frac{1}{n-m+1}\sum_{i=m}^{n} p'_{i\_y}, \qquad \theta = \operatorname{atan2}(-a,\, b).$$

Combining the current position information and the current posture information of the reference object gives the current pose information of the reference object as $(t_x, t_y, \theta)$.
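The per-strip computation can be sketched as below; fitting the line by total least squares (PCA) is an assumption, since the patent does not fix the fitting method, but the projection and pose formulas match the reconstruction above:

```python
import numpy as np

def strip_pose(points_xy: np.ndarray):
    """Fit a*x + b*y + c = 0 (with a^2 + b^2 = 1) to one strip's points, project
    the points onto the line, and return (t_x, t_y, theta)."""
    centroid = points_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(points_xy - centroid)  # PCA of the centered points
    a, b = vt[-1]                                   # unit normal of the fitted line
    c = -(a * centroid[0] + b * centroid[1])
    d = points_xy @ np.array([a, b]) + c            # signed distances to the line
    projected = points_xy - np.outer(d, (a, b))     # projection onto the line
    t_x, t_y = projected.mean(axis=0)               # current position information
    theta = float(np.arctan2(-a, b))                # current posture information
    return float(t_x), float(t_y), theta
```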
Further, the mobile robot is navigated according to the current pose information of the reference object.
In some embodiments, navigating the mobile robot according to current pose information of the reference comprises: acquiring an initial navigation path; correcting the initial navigation path by using the current pose information of the reference object to obtain a corrected navigation path; and navigating the movable robot according to the corrected navigation path.
An environment map in the map coordinate system is obtained, initial pose information of the reference object in the environment map is obtained, a navigation end point is determined according to the initial pose information, and a path is then planned according to the navigation end point and the starting point of the movable robot to obtain the initial navigation path. The initial pose information may be obtained by laser radar scanning at a preceding moment, or may be marked in advance in the environment map, which is not limited in the present application.
Then, in the moving process of the movable robot according to the initial navigation path, the current pose information of the reference object at the current moment needs to be scanned continuously, the initial navigation path is corrected according to the current pose information of the reference object, the corrected navigation path is obtained, and the movable robot is navigated according to the corrected navigation path until the navigation reaches the navigation end point.
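This correction cycle can be outlined in Python; all interfaces here (robot, planner, the scan call) are hypothetical placeholders used only to show the control flow, not an API defined by the patent:

```python
import time

def navigate_with_reference(robot, planner, scan_period: float):
    """Replan continuously from the latest observed reference pose."""
    path = planner.plan(robot.current_pose(), robot.initial_goal())  # initial navigation path
    while not robot.goal_reached():
        robot.follow(path)
        ref_pose = robot.scan_reference_pose()           # current pose information of the reference
        if ref_pose is not None:
            goal = planner.goal_from_reference(ref_pose)     # corrected navigation end point
            path = planner.plan(robot.current_pose(), goal)  # corrected navigation path
        time.sleep(scan_period)
```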
Taking the navigation process corresponding to the docking service executed by the intelligent transfer robot as an example for explanation: as shown in fig. 8, a docking path is preset according to the environment map, where the docking path includes a library position identification start point, a library position identification end point, and an initial navigation end point, the initial navigation end point being the library position point coinciding with the reference object point. Upon arriving at the library position identification start point, the intelligent transfer robot slows to a preset speed and, along the path between the library position identification start point and the library position identification end point, periodically scans the environment containing the reference object at a preset frequency; the current pose information of the reference object is calculated from the current point cloud data obtained by each scan using the embodiments above, giving the pose of the reference object relative to the laser scanning device. By combining this with the pose of the intelligent transfer robot in the environment map at the corresponding moment and the mounting extrinsic parameters of the laser scanning device, the calculated current pose information of the reference object can be projected into the environment map to obtain its pose representation in the map coordinate system.
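This projection is a chain of 2-D rigid transforms; the sketch below assumes each pose is an (x, y, theta) triple and names the frames explicitly, which the patent leaves implicit:

```python
import numpy as np

def se2(x: float, y: float, theta: float) -> np.ndarray:
    """Homogeneous 2-D transform for a pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def reference_pose_in_map(robot_in_map, laser_on_robot, ref_in_laser):
    """T_map_ref = T_map_robot @ T_robot_laser @ T_laser_ref."""
    T = se2(*robot_in_map) @ se2(*laser_on_robot) @ se2(*ref_in_laser)
    return float(T[0, 2]), float(T[1, 2]), float(np.arctan2(T[1, 0], T[0, 0]))
```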
Then, taking the current pose of the intelligent transfer robot as the starting point and the pose representation of the reference object in the map as the updated navigation end point, the path is re-planned to obtain a corrected navigation path, and the movable robot is navigated according to the corrected navigation path until it passes the library position identification end point or the docking service is completed.
Restricting the library position identification process to the stretch between the library position identification start point and the library position identification end point avoids two defects: before the intelligent transfer robot reaches the identification start point, the library position identification precision is insufficient; and after it passes the identification end point, the generated path cannot be executed because the robot is too close to the library position.
According to the navigation method of the movable robot described above, laser scanning is performed on the environment containing the reference object to obtain current point cloud data, and a reference object tracking record is acquired, the reference object tracking record recording the position of the reference object obtained from historical point cloud data whose acquisition time is earlier than that of the current point cloud data; point cloud extraction is performed on the current point cloud data according to its reflection intensity information to obtain an initial reference point cloud and a candidate reference point cloud; whether the candidate reference point cloud is a point cloud reflected by the reference object is verified using the position of the reference object in the reference object tracking record and the position of the candidate reference point cloud, and the verified candidate reference point cloud is taken as a supplementary reference point cloud; and the current pose information of the reference object is calculated by combining the initial reference point cloud and the supplementary reference point cloud, and the movable robot is navigated according to the current pose information of the reference object. This avoids inaccurate extraction of the reference point cloud caused by an unfavorable scanning angle or distance during laser scanning, yields more complete reference point cloud data and hence more accurate current pose information of the reference object, and improves the accuracy of navigating the movable robot according to the current pose information of the reference object.
Fig. 9 is a block diagram of a navigation device of a mobile robot shown in an exemplary embodiment of the present application. As shown in fig. 9, the navigation device 900 of the exemplary mobile robot includes: a data acquisition module 910, a point cloud extraction module 920, a verification module 930, and a navigation module 940. Specifically:
the data acquisition module 910 is configured to perform laser scanning on an environment containing a reference object to obtain current point cloud data, and acquire a tracking record of the reference object; the reference object tracking record is used for recording the position of a reference object obtained according to the historical point cloud data, and the acquisition time of the historical point cloud data is earlier than the acquisition time of the current point cloud data;
the point cloud extraction module 920 is configured to perform point cloud extraction on the current point cloud data according to the reflection intensity information of the current point cloud data, so as to obtain an initial reference object point cloud and a candidate reference object point cloud;
a verification module 930, configured to verify whether the candidate reference point cloud is a point cloud reflected by the reference by using the position of the reference and the position of the candidate reference point cloud in the reference tracking record, and use the verified candidate reference point cloud as a supplementary reference point cloud;
the navigation module 940 is configured to calculate current pose information of the reference object in combination with the initial reference object point cloud and the supplementary reference object point cloud, and navigate the movable robot according to the current pose information of the reference object.
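As a hedged sketch of how these four modules might cooperate in one recognition cycle, the following Python skeleton wires them into a pipeline; the class and method names are hypothetical and only mirror the module responsibilities listed above, not the patented implementation.

```python
class NavigationDevice:
    """Illustrative pipeline for device 900 (names are assumptions)."""

    def __init__(self, data_acq, extractor, verifier, navigator):
        self.data_acq = data_acq      # module 910: scan + tracking record
        self.extractor = extractor    # module 920: intensity-based extraction
        self.verifier = verifier      # module 930: candidate verification
        self.navigator = navigator    # module 940: pose solving + navigation

    def step(self):
        cloud, track_record = self.data_acq.acquire()
        initial, candidates = self.extractor.extract(cloud)
        supplement = self.verifier.verify(candidates, track_record)
        pose = self.navigator.solve_pose(initial + supplement)
        return self.navigator.navigate(pose)
```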
With the above exemplary navigation device of the mobile robot, inaccurate extraction of the reference object point cloud caused by a poor scanning angle or distance during laser scanning can be avoided, more complete reference object point cloud data is obtained, more accurate current pose information of the reference object is calculated, and the accuracy of navigating the mobile robot according to the current pose information of the reference object is improved.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a mobile robot according to the present application. The mobile robot 1000 includes a laser scanning device 1001 and a controller 1002, where the laser scanning device 1001 is configured to perform laser scanning on an environment containing a reference object to obtain current point cloud data, and the controller 1002 is configured to perform the steps in any of the above navigation method embodiments of the mobile robot.
The controller 1002 may also be referred to as a central processing unit (Central Processing Unit, CPU). The controller 1002 may be an integrated circuit chip having signal processing capability. The controller 1002 may also be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. In addition, the controller 1002 may be implemented jointly by multiple integrated circuit chips.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application. The computer-readable storage medium 1100 stores program instructions 1110 executable by a processor, the program instructions 1110 being used to implement the steps in any of the above navigation method embodiments of the mobile robot.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present application may be used to perform the methods described in the foregoing method embodiments; for their specific implementation, reference may be made to the descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing descriptions of the various embodiments emphasize the differences between them; for parts that are the same or similar, the embodiments may be referred to one another, and the descriptions are not repeated herein for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a logical functional division, and other divisions are possible in actual implementation; for example, units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be implemented in hardware or as a software functional unit. If implemented as a software functional unit and sold or used as a stand-alone product, the integrated unit may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A navigation method of a movable robot, comprising:
performing laser scanning on an environment containing a reference object to obtain current point cloud data, and acquiring a tracking record of the reference object; the reference object tracking record is used for recording the position of the reference object obtained according to historical point cloud data, and the acquisition time of the historical point cloud data is earlier than the acquisition time of the current point cloud data;
performing point cloud extraction on the current point cloud data according to the reflection intensity information of the current point cloud data to obtain an initial reference object point cloud and a candidate reference object point cloud;
verifying whether the candidate reference object point cloud is a point cloud reflected by the reference object by utilizing the position of the reference object and the position of the candidate reference object point cloud in the reference object tracking record, and taking the verified candidate reference object point cloud as a supplementary reference object point cloud;
and calculating the current pose information of the reference object by combining the initial reference object point cloud and the supplementary reference object point cloud, and navigating the movable robot according to the current pose information of the reference object.
2. The method of claim 1, wherein the verifying whether the candidate reference object point cloud is a point cloud reflected by the reference object using the position of the reference object and the position of the candidate reference object point cloud in the reference object tracking record comprises:
calculating the distance between the position of the candidate reference object point cloud and the position of the reference object in the reference object tracking record;
and if the distance is within a first distance threshold range, determining that the candidate reference object point cloud is a point cloud reflected by the reference object.
3. The method of claim 2, wherein before the calculating the distance between the position of the candidate reference object point cloud and the position of the reference object in the reference object tracking record, the method further comprises:
acquiring a coordinate system corresponding to the position of the reference object in the reference object tracking record, and obtaining a reference coordinate system;
and projecting the candidate reference object point cloud to the reference coordinate system to obtain the position of the candidate reference object point cloud.
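A minimal sketch of the verification in claims 2 and 3 might look as follows, assuming each candidate point cloud has already been projected into the reference coordinate system and is reduced to its 2D centroid; the centroid reduction and all names are assumptions of this example rather than details from the claims.

```python
import math

def verify_candidates(candidates, reference_positions, first_distance_threshold):
    """Keep candidate reference point clouds whose centroid lies within the
    first distance threshold of a recorded reference position.
    candidates: list of point clouds, each a list of (x, y) points already
    in the reference coordinate system; reference_positions: list of (x, y)."""
    supplements = []
    for cloud in candidates:
        if not cloud:
            continue
        cx = sum(p[0] for p in cloud) / len(cloud)
        cy = sum(p[1] for p in cloud) / len(cloud)
        for rx, ry in reference_positions:
            if math.hypot(cx - rx, cy - ry) <= first_distance_threshold:
                supplements.append(cloud)  # verified: point cloud reflected
                break                      # by the reference object
    return supplements
```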
4. The method of claim 1, wherein the performing point cloud extraction on the current point cloud data according to the reflection intensity information of the current point cloud data to obtain an initial reference object point cloud and a candidate reference object point cloud comprises:
acquiring a first intensity threshold range and a second intensity threshold range;
extracting points whose reflection intensity is within the first intensity threshold range from the current point cloud data to obtain the initial reference object point cloud;
and extracting points whose reflection intensity is within the second intensity threshold range from the current point cloud data to obtain the candidate reference object point cloud.
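Claim 4's extraction step amounts to two intensity-band filters over the same scan. A minimal sketch, assuming each point carries an (x, y, intensity) layout and inclusive (lo, hi) threshold ranges — both assumptions of this example:

```python
def extract_by_intensity(points, first_range, second_range):
    """Split a scan into initial and candidate reference object point clouds
    by reflection intensity. points: iterable of (x, y, intensity)."""
    lo1, hi1 = first_range
    lo2, hi2 = second_range
    initial = [p for p in points if lo1 <= p[2] <= hi1]
    candidate = [p for p in points if lo2 <= p[2] <= hi2]
    return initial, candidate

# Example: strong returns form the initial cloud, while a weaker band
# (e.g. oblique or distant returns from the reflector) forms the candidates.
initial, candidate = extract_by_intensity(
    [(1.0, 0.5, 220), (1.1, 0.5, 140), (2.0, 3.0, 30)],
    first_range=(200, 255), second_range=(100, 199))
```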
5. The method of claim 1, wherein the calculating the current pose information of the reference object by combining the initial reference object point cloud and the supplementary reference object point cloud comprises:
fusing the initial reference object point cloud and the supplementary reference object point cloud to obtain a reference object target point cloud;
performing reference line feature extraction on the reference object target point cloud to obtain reference line features corresponding to the reference object target point cloud;
calculating current position information and current posture information of the reference object by using the reference line features;
and obtaining the current pose information of the reference object according to the current position information and the current posture information of the reference object.
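One common way to realize the line-feature step of claim 5 is a principal-axis (total least squares) fit over the fused 2D point cloud; the patent does not specify the fitting method, so the following is only an assumed illustration that returns the centroid as the position and the line direction as the posture.

```python
import math

def fit_line_pose(points):
    """Fit a reference line to the fused target point cloud and derive a
    2D pose (x, y, theta) from it. points: non-empty list of (x, y)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    # Direction of the principal axis of the 2x2 covariance matrix gives
    # the orientation of the fitted reference line.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (mx, my, theta)
```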
6. The method of claim 1, wherein the navigating the movable robot according to the current pose information of the reference object comprises:
acquiring an initial navigation path;
correcting the initial navigation path by using the current pose information of the reference object to obtain a corrected navigation path;
and navigating the movable robot according to the corrected navigation path.
7. The method according to any one of claims 1 to 6, further comprising:
updating the reference object tracking record according to the current pose information of the reference object.
8. The method of claim 7, wherein the reference object is composed of a plurality of reflective modules arranged at intervals, the current pose information contains the positions of identified reflective modules, and the reference object tracking record contains the positions of tracked reflective modules, the positions of reflective modules to be tracked, and the observation count records corresponding to the reflective modules to be tracked; the updating the reference object tracking record according to the current pose information of the reference object comprises:
calculating the distances between the position of each identified reflective module and, respectively, the position of each tracked reflective module and the position of each reflective module to be tracked in the reference object tracking record;
if the distance between an identified reflective module and the tracked reflective modules is not within a second distance threshold range and its distance to the reflective modules to be tracked is not within a third distance threshold range, adding the identified reflective module to the reference object tracking record as a reflective module to be tracked, and initializing the observation count record corresponding to it;
if the distance between an identified reflective module and the tracked reflective modules is not within the second distance threshold range and its distance to a reflective module to be tracked is within the third distance threshold range, updating the observation count record corresponding to that reflective module to be tracked;
and converting a reflective module to be tracked whose observation count record is greater than a preset count threshold into a tracked reflective module.
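Claim 8's bookkeeping can be sketched as follows; the data structures (position tuples, a pending list of dicts with observation counts) and all parameter names are assumptions of this example, not the patented representation.

```python
import math

def update_track_record(identified, tracked, pending, d2, d3, count_threshold):
    """identified: (x, y) positions of reflective modules from the current
    pose result; tracked: list of (x, y); pending: list of dicts
    {'pos': (x, y), 'count': int} for reflective modules to be tracked;
    d2/d3: second/third distance thresholds."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for p in identified:
        if any(dist(p, q) <= d2 for q in tracked):
            continue                                # matches a tracked module
        hits = [e for e in pending if dist(p, e['pos']) <= d3]
        if hits:
            for e in hits:
                e['count'] += 1                     # re-observed pending module
        else:
            pending.append({'pos': p, 'count': 1})  # brand-new module
    # Promote pending modules observed more than the preset count threshold.
    for e in [e for e in pending if e['count'] > count_threshold]:
        tracked.append(e['pos'])
        pending.remove(e)
```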
9. The method of claim 8, wherein the verifying whether the candidate reference object point cloud is a point cloud reflected by the reference object using the position of the reference object and the position of the candidate reference object point cloud in the reference object tracking record comprises:
calculating the distance between the position of each tracked reflective module in the reference object tracking record and the position of the candidate reference object point cloud;
taking the tracked reflective module with the smallest distance to the candidate reference object point cloud as a target tracked reflective module;
and if the distance between the candidate reference object point cloud and the target tracked reflective module is within the first distance threshold range, determining that the candidate reference object point cloud is the point cloud reflected by the target tracked reflective module.
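Claim 9 refines the verification of claim 2 by matching only against the nearest tracked reflective module. A minimal sketch, again reducing the candidate cloud to a centroid position (an assumption of this example):

```python
import math

def verify_against_nearest(candidate_centroid, tracked, first_threshold):
    """Return (accepted, matched_module): match the candidate's centroid to
    the nearest tracked reflective module and accept it only when the
    distance is within the first distance threshold range."""
    if not tracked:
        return False, None
    nearest = min(tracked, key=lambda q: math.hypot(
        candidate_centroid[0] - q[0], candidate_centroid[1] - q[1]))
    d = math.hypot(candidate_centroid[0] - nearest[0],
                   candidate_centroid[1] - nearest[1])
    return d <= first_threshold, nearest
```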
10. A movable robot, comprising:
a laser scanning device, configured to perform laser scanning on an environment containing a reference object to obtain current point cloud data; and
a controller, configured to implement the navigation method of a movable robot according to any one of claims 1 to 9.
CN202311528329.0A 2023-11-15 2023-11-15 Navigation method of movable robot and movable robot Pending CN117519164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311528329.0A CN117519164A (en) 2023-11-15 2023-11-15 Navigation method of movable robot and movable robot

Publications (1)

Publication Number Publication Date
CN117519164A true CN117519164A (en) 2024-02-06

Family

ID=89760299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311528329.0A Pending CN117519164A (en) 2023-11-15 2023-11-15 Navigation method of movable robot and movable robot

Country Status (1)

Country Link
CN (1) CN117519164A (en)

Similar Documents

Publication Publication Date Title
KR101003168B1 (en) Multidimensional Evidence Grids and System and Methods for Applying Same
US9046893B2 (en) Deep lane navigation system for automatic guided vehicles
CN110837814B (en) Vehicle navigation method, device and computer readable storage medium
US12014320B2 (en) Systems, devices, and methods for estimating stock level with depth sensor
CN112363158B (en) Pose estimation method for robot, robot and computer storage medium
US7696894B2 (en) Method for determining a relative position of a mobile unit by comparing scans of an environment and mobile unit
US20210039257A1 (en) Workpiece picking device and workpiece picking method
CN114236564B (en) Method for positioning robot in dynamic environment, robot, device and storage medium
CN109100744B (en) Target positioning method and system for AGV
CN113759906B (en) Vehicle alignment method and device, computer equipment and storage medium
WO2023005384A1 (en) Repositioning method and device for mobile equipment
CN110850859A (en) Robot and obstacle avoidance method and obstacle avoidance system thereof
CN112673280A (en) Road detection method for a motor vehicle equipped with a LIDAR sensor
CN113768419B (en) Method and device for determining sweeping direction of sweeper and sweeper
US11592826B2 (en) Method, system and apparatus for dynamic loop closure in mapping trajectories
CN117519164A (en) Navigation method of movable robot and movable robot
CN116795103A (en) Robot movement control method, system and device and robot
CN114661048A (en) Mobile robot docking method and device and electronic equipment
CN112630745B (en) Laser radar-based environment mapping method and device
CN116443012B (en) Tractor, docking method of side-by-side towed targets and electronic equipment
CN116342858B (en) Object detection method, device, electronic equipment and storage medium
CN116953719A (en) Target positioning method, target butting method and target butting device
CN113625296B (en) Robot positioning method and device based on reflector and robot
CN115035425B (en) Target recognition method, system, electronic equipment and storage medium based on deep learning
CN116539026B (en) Map construction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination