CN116580215A - Robot and repositioning method, device and storage medium thereof - Google Patents

Robot and repositioning method, device and storage medium thereof

Info

Publication number
CN116580215A
CN116580215A (application CN202310446731.8A)
Authority
CN
China
Prior art keywords
point cloud
robot
repositioning
pose
matching
Prior art date
Legal status
Pending
Application number
CN202310446731.8A
Other languages
Chinese (zh)
Inventor
韦和钧
焦继超
赖有仿
温焕宇
毕占甲
何婉君
熊金冰
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN202310446731.8A priority Critical patent/CN116580215A/en
Publication of CN116580215A publication Critical patent/CN116580215A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757: Matching configurations of points or features
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02: Control of position or course in two dimensions
    • G05D 1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/24: Aligning, centring, orientation detection or correction of the image
    • G06V 10/245: Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/35: Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V 20/36: Indoor scenes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/70: Labelling scene content, e.g. deriving syntactic or semantic representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Electromagnetism (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of robot positioning, and provides a robot and a repositioning method, device and storage medium thereof. The method comprises the following steps: acquiring a first point cloud of the scene where the robot is located; extracting feature descriptors from the first point cloud, and performing preliminary repositioning on the robot according to the feature descriptors to determine a preliminary pose of the robot; determining a second point cloud located within a preset range of the preliminary pose; and determining a matching error between the first point cloud and the second point cloud by a point cloud matching method, and determining the repositioning pose of the robot according to the matching error. After the preliminary pose is determined through the feature descriptors, point cloud matching is performed against the second point cloud determined by the preliminary pose, so that the amount of point cloud matching computation is reduced, the point cloud matching efficiency is improved, and the repositioning accuracy is improved at the same time.

Description

Robot and repositioning method, device and storage medium thereof
Technical Field
The present application relates to the field of robot positioning, and in particular, to a robot, a repositioning method, a repositioning device, and a storage medium thereof.
Background
While a robot is executing a task, its positioning may be lost because of noise in the sensor data and noise in the robot's motion. When positioning is lost, the robot needs to be repositioned to determine its current position so that it can continue to perform the task effectively.
Feature-matching-based repositioning mainly includes approaches such as point cloud matching, descriptor matching and semantic information matching. In complex indoor and outdoor environments, the efficiency and robustness of feature-matching-based positioning are low, which prevents the positioning accuracy of the robot from being improved.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a robot and a repositioning method, device and storage medium thereof, in order to solve the prior-art problem that, in complex indoor and outdoor environments, feature-matching-based positioning has low efficiency and robustness and the positioning accuracy of the robot cannot be improved.
A first aspect of an embodiment of the present application provides a repositioning method for a robot, the method including:
acquiring a first point cloud of a scene where the robot is located;
extracting a feature descriptor from the first point cloud, and performing preliminary repositioning on the robot according to the feature descriptor to determine a preliminary pose of the robot;
determining a second point cloud located within a preset range of the preliminary pose;
and determining a matching error of the first point cloud and the second point cloud by a point cloud matching method, and determining the repositioning pose of the robot according to the matching error.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the determining a repositioning pose of the robot according to the matching error includes:
representing the first point cloud and the second point cloud by an occupancy probability grid in case a minimum value of a matching error of the first point cloud and the second point cloud is larger than a predetermined first error threshold;
and matching the second point cloud with the first point cloud through a branch-and-bound method, and determining the repositioning pose of the robot according to the condition that the sum of the occupation probability scores of the grid areas in the second point cloud matched with the first point cloud is maximum.
With reference to the first aspect, in a second possible implementation manner of the first aspect, the determining the repositioning pose of the robot according to a case that a sum of occupancy probability scores of grid areas in a second point cloud matched by the first point cloud is largest includes:
determining a first pose of the robot under the condition that the sum of the occupation probability scores of the grid areas in the second point cloud matched with the first point cloud is maximum;
and under the condition that the matching error between the first pose and the pose corresponding to the first point cloud is larger than a preset second error threshold, determining the repositioning pose of the robot by a brute-force matching method.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the extracting a feature descriptor in the first point cloud includes:
performing radial division and circumferential division on the first point cloud to obtain a first grid set, and/or performing horizontal division and vertical division on the first point cloud to obtain a second grid set;
and determining a first feature description sub-matrix corresponding to the first grid set and a second feature description sub-matrix corresponding to the second grid set according to the descriptors, wherein the maximum value of the point cloud in each grid in the direction perpendicular to the horizontal plane is used as the descriptor of that grid.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, performing preliminary repositioning on the robot according to the feature descriptor includes:
compressing the first feature description submatrix and/or the second feature description submatrix into one-dimensional features;
matching the one-dimensional features of the first point cloud against the one-dimensional features of the key frames in the constructed map, and determining candidate key frames;
and matching the candidate key frame with the first feature description submatrix and/or the second feature description submatrix of the first point cloud, and performing preliminary repositioning on the robot.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, compressing the first feature description sub-matrix and/or the second feature description sub-matrix into one-dimensional features includes:
determining the one-dimensional feature according to the sum value or the average value of descriptors of each row in the first feature description sub-matrix and/or the second feature description sub-matrix;
or determining the one-dimensional feature according to the sum value or the average value of descriptors of each column in the first feature description sub-matrix and/or the second feature description sub-matrix.
With reference to the first aspect or any one of the first to fifth possible implementation manners of the first aspect, in a sixth possible implementation manner of the first aspect, performing preliminary repositioning on the robot according to the feature descriptor includes:
determining a local key frame according to the initial pose of the robot;
and performing preliminary repositioning on the robot according to the feature descriptors of the local key frames and the feature descriptors of the first point cloud.
A second aspect of an embodiment of the present application provides a repositioning device for a robot, the device comprising:
the point cloud acquisition unit is used for acquiring a first point cloud of a scene where the robot is located;
the preliminary pose determining unit is used for extracting feature descriptors from the first point cloud, and performing preliminary repositioning on the robot according to the feature descriptors to determine the preliminary pose of the robot;
a second point cloud determining unit for determining a second point cloud located within the predetermined range of the preliminary pose;
and the repositioning pose determining unit is used for determining the matching error of the first point cloud and the second point cloud through a point cloud matching method and determining the repositioning pose of the robot according to the matching error.
A third aspect of an embodiment of the application provides a robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of the first aspects when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any of the first aspects.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: the preliminary pose matching the first point cloud of the scene where the robot is located can be obtained efficiently through feature descriptor matching, and point cloud matching is then performed based on the preliminary pose, so that a repositioning pose more accurate than the preliminary pose is determined from the matching error, which improves the repositioning accuracy of the robot; at the same time, point cloud matching does not have to be computed between the first point cloud and every key frame, which reduces the amount of repositioning computation and improves repositioning efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic implementation flow diagram of a repositioning method of a robot according to an embodiment of the present application;
FIG. 2 is a schematic diagram of grid division according to an embodiment of the present application;
FIG. 3 is a schematic diagram of still another grid division provided by an embodiment of the present application;
fig. 4 is a schematic implementation flow chart of a method for performing preliminary repositioning on a robot according to an embodiment of the present application;
FIG. 5 is a schematic view of a repositioning apparatus for a robot according to an embodiment of the present application;
fig. 6 is a schematic view of a robot according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
While the robot is executing a navigation task, its positioning may be lost because of limited sensor accuracy or motion accuracy, and the robot then needs to be repositioned to reacquire its positioning information.
When the robot is repositioned, the point cloud currently acquired by the robot is typically matched against the point clouds of historical key frames. When a point cloud matching algorithm is used directly, the large amount of computation leads to low repositioning efficiency and a low success rate. When repositioning is based on semantic information, objects in the map are given semantic labels that serve as auxiliary reference information for repositioning; this can improve the success rate, but the approach does not generalize to diverse scenes, its robustness is insufficient, semantic misidentification can cause the robot to be repositioned incorrectly, and acquiring the semantic information consumes considerable computing resources. When repositioning is based on descriptors, it is difficult to fully describe the effective feature information of the scene, and the success rate is low when the scene has changed since the map was built.
Based on this, the embodiment of the application provides a repositioning method for a robot, and an execution subject of the method can be the robot. As shown in fig. 1, the method includes:
in S101, a first point cloud of a scene where the robot is located is acquired.
The robot in the embodiment of the application can be a robot with an intelligent navigation function, and comprises a meal delivery robot, a sweeping robot, a patrol robot, a greeting robot, a disinfection robot and the like. The robot can acquire a first point cloud in a scene where the robot is located through the point cloud acquisition device. The point cloud acquisition means may comprise, for example, a lidar, a depth camera, etc.
When collecting the first point cloud of the scene where the robot is located, the first point cloud may be acquired at a predetermined time interval. The time interval may be determined according to the moving speed of the robot and the openness of the scene: the faster the robot moves, the shorter the time interval may be, and the slower it moves, the longer the interval may be; likewise, the more open the scene, the longer the acquisition interval may be, and the more confined the scene, the shorter the interval may be.
In a possible implementation, the first point cloud may also be acquired from robot motion information. For example, the first point cloud may be collected when the distance that the robot moves relative to the last collection point reaches a predetermined distance threshold, or the first point cloud may be collected when the angle that the robot rotates relative to the last collection point reaches a predetermined angle threshold.
In a possible implementation, the first point cloud may also be determined from the number of features included in the acquired point cloud. For example, if the number of features included in the point cloud of the current frame reaches a predetermined number value, the point cloud of the current frame is taken as the first point cloud.
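The three acquisition triggers above can be combined in a single check. The following is a minimal sketch of such a trigger, assuming a planar pose (x, y, yaw); the concrete threshold names and values are illustrative placeholders, not values taken from this application.

```python
import numpy as np

# Illustrative thresholds; the application does not fix concrete values.
TIME_INTERVAL_S = 1.0        # predetermined time interval
DIST_THRESHOLD_M = 0.5       # predetermined moving-distance threshold
ANGLE_THRESHOLD_RAD = 0.3    # predetermined rotation-angle threshold
MIN_FEATURE_COUNT = 50       # predetermined number of features

def should_acquire_first_cloud(now, last_time, pose, last_pose, feature_count):
    """Return True if any of the acquisition conditions described above holds.
    pose and last_pose are (x, y, yaw); yaw is in radians."""
    if now - last_time >= TIME_INTERVAL_S:
        return True
    if np.linalg.norm(np.asarray(pose[:2]) - np.asarray(last_pose[:2])) >= DIST_THRESHOLD_M:
        return True
    # wrap the yaw difference into [-pi, pi] before comparing
    yaw_diff = abs((pose[2] - last_pose[2] + np.pi) % (2 * np.pi) - np.pi)
    if yaw_diff >= ANGLE_THRESHOLD_RAD:
        return True
    return feature_count >= MIN_FEATURE_COUNT
```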
In S102, extracting a feature descriptor in the first point cloud, and performing preliminary repositioning on the robot according to the feature descriptor, so as to determine a preliminary pose of the robot.
Before point cloud matching is performed, the first point cloud is matched against the feature descriptors of key frames acquired in advance to preliminarily reposition the robot. Compared with a method based on point cloud matching, matching based on feature descriptors greatly reduces the amount of matching computation and helps determine the preliminary pose of the robot efficiently. Based on feature descriptor matching, one or more key frames matching the feature descriptors of the first point cloud of the current frame can be obtained from the pre-acquired key frames.
Before feature descriptor matching, the key frame point clouds and the first point cloud can be divided into grids to obtain the feature descriptor of each grid, and a feature descriptor matrix is determined from the grids. The grid division of the first point cloud or of a key frame point cloud may include the following modes:
Division mode one:
as shown in fig. 2, the first point cloud or the point cloud of the key frame may be truncated to a predetermined radius of the divided area, and the grid division may be performed in the divided area. The preset radius may be determined according to different scenarios. For example, for outdoor scenes, the predetermined radius may be any value from 30-100 meters. For example, a radius of 80 m may be set to obtain a divided region having a radius of 80 m or less. The predetermined number of grids may be divided in the radial direction and the circumferential direction according to the determined division area. In each grid, the maximum value in the z-axis direction of the point cloud in the grid, i.e. perpendicular to the horizontal direction, can be used as a feature descriptor of the grid. If the number of grids is r×q, the point cloud or the first point cloud of the whole key frame can be simply represented by a matrix of a corresponding scale, such as a feature descriptor matrix of r×q, and the feature descriptor matrix can be called a rotation descriptor matrix or a first feature descriptor matrix.
Division mode two:
As shown in fig. 3, the first point cloud or the point cloud of the historical key frame may be rasterized at a predetermined interval, or into a predetermined number of cells, in the horizontal direction and the vertical direction; for example, the numbers of cells in the two directions may be x and y, respectively. Similarly, the maximum value of the point cloud in each grid in the z-axis direction, i.e. the direction perpendicular to the horizontal plane, may be used as the feature descriptor of that grid. The point cloud of the key frame or the first point cloud can then be represented by an x×y feature descriptor matrix, which may be called the lateral descriptor matrix or the second feature descriptor matrix.
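The two division modes can be sketched with numpy as follows, assuming the point cloud is an N×3 array in the sensor frame; the grid counts, truncation radius and half-size are placeholder parameters, and the per-cell descriptor is the maximum z value described above (empty cells default to 0, which assumes non-negative z).

```python
import numpy as np

def rotation_descriptor(points, radius=80.0, n_rings=20, n_sectors=60):
    """First (rotation) descriptor matrix: an r x q polar grid where each cell
    holds the maximum z value of the points falling into it."""
    r = np.hypot(points[:, 0], points[:, 1])
    keep = r <= radius                              # truncate to the divided area
    pts, r = points[keep], r[keep]
    theta = np.arctan2(pts[:, 1], pts[:, 0]) + np.pi          # in [0, 2*pi]
    ring = np.minimum((r / radius * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * n_sectors).astype(int), n_sectors - 1)
    desc = np.zeros((n_rings, n_sectors))           # empty cells stay 0 (assumes z >= 0)
    np.maximum.at(desc, (ring, sector), pts[:, 2])  # per-cell maximum z
    return desc

def lateral_descriptor(points, half_size=40.0, nx=40, ny=40):
    """Second (lateral) descriptor matrix: an x by y Cartesian grid with the
    same per-cell maximum-z descriptor."""
    keep = (np.abs(points[:, 0]) <= half_size) & (np.abs(points[:, 1]) <= half_size)
    pts = points[keep]
    ix = np.minimum(((pts[:, 0] + half_size) / (2 * half_size) * nx).astype(int), nx - 1)
    iy = np.minimum(((pts[:, 1] + half_size) / (2 * half_size) * ny).astype(int), ny - 1)
    desc = np.zeros((nx, ny))                        # empty cells stay 0 (assumes z >= 0)
    np.maximum.at(desc, (ix, iy), pts[:, 2])
    return desc
```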
The matching result may be determined using the first feature description sub-matrices of the first point cloud and the historical key frame, or using their second feature description sub-matrices, or by matching both: the first feature description sub-matrices are matched and the second feature description sub-matrices are matched, and the two results are combined to determine the matching result.
When both the first feature description sub-matrices and the second feature description sub-matrices of the first point cloud and the historical key frame are matched, the first point cloud and the key frame point cloud may be regarded as matched if the matching result of either feature description sub-matrix (the first or the second) meets the preset matching error requirement.
In order to further improve the matching efficiency of the feature descriptors, the embodiment of the present application may further perform preliminary repositioning on the robot by using a hierarchical matching manner as shown in fig. 4, which specifically includes:
in S401, the first and/or second feature description sub-matrices are compressed into one-dimensional features.
Because the first feature description sub-matrix or the second feature description sub-matrix is two-dimensional data, matching a historical key frame against the first feature description sub-matrix of the first point cloud requires two-dimensional matching computation. To further improve matching efficiency, the two-dimensional first feature descriptors and/or second feature descriptors can be compressed into one-dimensional features, and matching can first be performed on the one-dimensional features, which greatly improves matching efficiency.
Compressing the two-dimensional first and/or second feature descriptor sub-matrices into one-dimensional features may include:
and determining the one-dimensional feature according to the sum value or the average value of descriptors of each row in the first feature descriptor sub-matrix and/or the second feature descriptor sub-matrix. Or determining the one-dimensional feature according to the sum value or the average value of descriptors of each column in the first feature description sub-matrix and/or the second feature description sub-matrix.
For example, assuming that the first feature descriptor sub-matrix and/or the second feature descriptor sub-matrix is an x y matrix, y feature descriptors in each row may be compressed into one feature descriptor. The compressing means may include, for example, summing or averaging, to determine compressed feature descriptors of the y feature descriptors of the row. For example, by averagingIn this way, the feature descriptors after the feature descriptor compression of the first row are calculated as: s1= (S 11 +S 12 +…S 1i …+S 1y ) And/y. Wherein S1 is a first feature descriptor of the compressed one-dimensional matrix, S 1i And y is the number of columns of the first feature descriptor matrix or the second feature descriptor matrix.
In S402, candidate key frames are determined according to the matching between the one-dimensional features of the first point cloud and the one-dimensional features of the key frames in the map.
After the first point cloud and the point clouds of the historical key frames in the constructed map are compressed into one-dimensional features, the one-dimensional features of the first point cloud are matched against the one-dimensional features of the key frames in the map, so that the key frames whose one-dimensional features match those of the first point cloud can be determined quickly; these key frames are called candidate key frames. Since determining the candidate key frames only requires comparing one-dimensional features, the computational efficiency of this step can be greatly improved.
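One possible way to rank the mapped key frames by their one-dimensional features is sketched below; the Euclidean distance and the top-k cut-off are assumptions, since the application only requires that the one-dimensional features be matched.

```python
import numpy as np

def select_candidate_keyframes(query_1d, keyframe_1d_list, top_k=5):
    """Rank the mapped key frames by the distance between their 1-D features
    and the 1-D feature of the first point cloud, and keep the closest top_k
    as candidate key frames."""
    dists = np.array([np.linalg.norm(query_1d - kf) for kf in keyframe_1d_list])
    order = np.argsort(dists)[:top_k]
    return order, dists[order]
```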
In S403, the candidate keyframe is matched with the first feature description sub-matrix and/or the second feature description sub-matrix of the first point cloud, and the robot is preliminarily repositioned.
After key frames whose one-dimensional features match well enough, i.e. the candidate key frames, have been determined by the one-dimensional matching computation, the candidate key frames and the first point cloud can be matched on their two-dimensional features (i.e. the first feature description sub-matrix or the second feature description sub-matrix). Based on this matching computation over one or more candidate key frames, the pose of the candidate key frame whose two-dimensional features meet the matching requirement is taken as the pose of the robot after preliminary repositioning.
Because the two-dimensional matching is only performed on the candidate key frames obtained from the one-dimensional matching, two-dimensional matching computation does not have to be carried out for every key frame in the map, which reduces the amount of matching computation and improves the efficiency of preliminary repositioning.
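A sketch of this second, two-dimensional matching stage is shown below, assuming the candidate identifiers come from the previous step; the mean absolute difference between descriptor matrices and the error threshold are illustrative choices, not requirements of the application.

```python
import numpy as np

def preliminary_relocalize(query_desc, candidate_ids, keyframe_descs, keyframe_poses,
                           max_error=0.5):
    """Match the 2-D descriptor matrix of the first point cloud only against the
    candidate key frames and take the pose of the best match, provided its error
    meets the matching requirement, as the preliminary pose."""
    best_id, best_err = None, np.inf
    for kf_id in candidate_ids:
        err = np.mean(np.abs(query_desc - keyframe_descs[kf_id]))
        if err < best_err:
            best_id, best_err = int(kf_id), err
    if best_id is not None and best_err <= max_error:
        return keyframe_poses[best_id], best_err
    return None, best_err        # preliminary repositioning failed
```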
In a possible implementation manner, when determining which key frames of the constructed map to match against, either global repositioning or local repositioning may be adopted, depending on whether a rough initial pose of the robot is available. A rough initial pose means that the robot can obtain positioning information, but the accuracy of that information may not meet the accuracy requirement of repositioning.
When a rough initial pose cannot be acquired at repositioning time, global repositioning can be adopted, and the repositioning method is executed over the whole global map. When global repositioning initializes the global point cloud map, voxel-filter downsampling is first performed on the map, the feature descriptors of the three-dimensional point cloud of the current frame are calculated, and the corresponding key frames in the map are then found by searching and matching these feature descriptors.
When the robot can determine a rough initial pose through a positioning method such as GNSS (Global Navigation Satellite System), or by receiving a set positioning value, local key frames can be determined for repositioning according to the rough initial pose. Voxel-filter downsampling is performed on the map determined by the local key frames, the feature descriptors of the three-dimensional point cloud of the current first point cloud are calculated, the matching historical key frames are found by searching and matching these feature descriptors, and the robot is preliminarily repositioned to obtain its preliminary pose.
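For the voxel-filter downsampling mentioned above, a simple numpy-only sketch could look as follows; the voxel size is an assumed parameter, and production code would typically rely on a point cloud library instead.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Voxel-filter downsampling: all points falling into the same voxel are
    replaced by their centroid."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.empty((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```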
In S103, a second point cloud located within the predetermined range of the preliminary pose is determined.
After the preliminary pose of the robot has been determined through the feature descriptors, the second point cloud can be determined based on the preliminary pose. For example, the point cloud within a predetermined radius around the preliminary pose, taken as the center point, may be selected as the second point cloud. Of course, the second point cloud may also be determined from the preliminary pose using regions of other shapes.
Since the preliminary pose is more accurate than the rough initial pose, determining the second point cloud based on the preliminary pose yields a local point cloud region that is more accurate than one determined from the rough initial pose.
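A sketch of selecting the second point cloud from the map around the preliminary pose; the circular region and the 20 m radius are illustrative choices, and, as noted above, other shapes are equally possible.

```python
import numpy as np

def second_point_cloud(map_points, preliminary_pose, radius=20.0):
    """Select the map points inside a circle of the given radius around the
    preliminary pose (x, y, yaw) as the second point cloud."""
    center = np.asarray(preliminary_pose[:2])
    d = np.linalg.norm(map_points[:, :2] - center, axis=1)
    return map_points[d <= radius]
```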
In S104, a matching error between the first point cloud and the second point cloud is determined by a point cloud matching method, and a repositioning pose of the robot is determined according to the matching error.
After the second point cloud has been determined according to the preliminary pose, point cloud matching can be computed based on this more accurate second point cloud. Because the object of the point cloud matching computation is the second point cloud determined by the preliminary pose, matching does not have to be computed against every key frame in the constructed map; matching accuracy is thus guaranteed while the amount of point cloud matching computation is greatly reduced, which improves the repositioning efficiency of the robot.
When the first point cloud and the second point cloud are matched, the matching errors between the first point cloud of the current frame and the different historical key frames can be obtained, and the pose of the historical key frame with the smallest matching error can be selected as the repositioning pose of the robot.
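The matching error between the first point cloud, placed at a candidate pose, and the second point cloud can be scored, for example, as a mean nearest-neighbour distance. The sketch below uses scipy's KD-tree for the neighbour search and works in 2D; a full implementation would additionally refine the pose with ICP or a similar registration method, which is not shown here.

```python
import numpy as np
from scipy.spatial import cKDTree

def matching_error(first_cloud, second_cloud, pose):
    """Mean nearest-neighbour distance between the first point cloud, transformed
    into the map frame by a candidate pose (x, y, yaw), and the second point cloud."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    transformed = first_cloud[:, :2] @ R.T + np.array([x, y])
    tree = cKDTree(second_cloud[:, :2])
    dists, _ = tree.query(transformed)
    return dists.mean()
```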
In a possible implementation, when the repositioning pose of the robot is determined according to the matching error, it may happen that the minimum matching error is still greater than a predetermined first error threshold. In this case the match between the first point cloud of the robot's current pose and the point cloud of the historical key frame with the smallest matching error does not meet the preset matching degree requirement, and the repositioning pose of the robot can be further determined by a branch-and-bound method.
For example, the first point cloud and the second point cloud may be rasterized according to the grid division method shown in fig. 2 or fig. 3 and represented by an occupancy probability grid, i.e. each divided grid cell is described by an occupancy probability. The probability that a grid cell is occupied can be updated according to the pose of each point in the local map coordinate system, i.e. the coordinate system corresponding to the second point cloud; the grid cells lying on the line connecting a point to the origin of the coordinate system (excluding the cell hit by the point itself) are updated with the free-space probability.
The occupancy probability grid of the first point cloud can then be matched against the occupancy probability grid of the second point cloud: pose solving is converted into a nonlinear least-squares problem, and the pose at which the current-frame point cloud matches the local point cloud with the largest occupancy probability of the corresponding grid cells, i.e. the pose with the largest sum of occupancy probability scores, is taken as the repositioning pose of the robot.
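Only the scoring step of this search is sketched below: given an occupancy probability grid built from the second point cloud, it sums the occupancy probabilities of the cells hit by the first point cloud at a candidate pose. How the branch-and-bound search enumerates candidate poses, and the grid indexing convention, are assumptions of this sketch.

```python
import numpy as np

def occupancy_score(scan_points, prob_grid, origin, resolution, pose):
    """Sum of occupancy probabilities of the grid cells hit by the scan when it
    is placed at the candidate pose (x, y, yaw). prob_grid is assumed to be
    indexed as prob_grid[x_index, y_index] with 'origin' at cell (0, 0)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    pts = scan_points[:, :2] @ R.T + np.array([x, y])
    ij = np.floor((pts - np.asarray(origin)) / resolution).astype(int)
    h, w = prob_grid.shape
    valid = (ij[:, 0] >= 0) & (ij[:, 0] < h) & (ij[:, 1] >= 0) & (ij[:, 1] < w)
    return prob_grid[ij[valid, 0], ij[valid, 1]].sum()
```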
In a possible implementation manner, determining the repositioning pose of the robot from the pose with the largest sum of occupancy probability scores found by the branch-and-bound method may further include:
and determining a first pose of the robot under the condition that the sum of the occupation probability scores of the grid areas in the second point cloud matched with the first point cloud is maximum. And calculating a matching error of the first pose and the current pose of the robot, namely the pose corresponding to the first point cloud. If the matching error is greater than a predetermined second error threshold, determining the repositioning pose of the robot by a violent matching method. And performing iterative matching on the first point cloud and the local second point cloud of the current frame for a plurality of times, and if the matching score reaches a preset score threshold or the matching error is smaller than a preset third error threshold, obtaining the pose of the first point cloud matching as the repositioning pose of the robot.
In the embodiments of the present application, preliminary positioning is performed with the feature descriptors, the second point cloud is determined based on the preliminary positioning, and the second point cloud is matched against the first point cloud, which reduces the amount of point cloud matching computation, improves repositioning efficiency and at the same time improves repositioning accuracy. During feature descriptor matching, the two-dimensional feature descriptor matrices can be compressed into one-dimensional features for pre-matching to determine candidate key frames, and the two-dimensional features are then matched only for the candidate key frames, which reduces the amount of two-dimensional matching computation and improves the efficiency of feature descriptor matching. When point cloud matching fails, the repositioning pose of the robot can be determined by the branch-and-bound method; and when both point cloud matching and the branch-and-bound method fail, all candidate poses of the first point cloud within the second point cloud can be evaluated by brute-force matching to obtain the repositioning pose matching the first point cloud.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 5 is a schematic view of a repositioning device of a robot according to an embodiment of the present application. As shown in fig. 5, the apparatus includes:
a point cloud obtaining unit 501, configured to obtain a first point cloud of a scene where the robot is located;
a preliminary pose determining unit 502, configured to extract a feature descriptor in the first point cloud, perform preliminary repositioning on the robot according to the feature descriptor, and determine a preliminary pose of the robot;
a second point cloud determining unit 503 configured to determine a second point cloud located within the predetermined range of the preliminary pose;
and the repositioning pose determining unit 504 is configured to determine a matching error between the first point cloud and the second point cloud by using a point cloud matching method, and determine a repositioning pose of the robot according to the matching error.
The repositioning device of the robot shown in fig. 5 corresponds to the repositioning method of the robot shown in fig. 1.
Fig. 6 is a schematic view of a robot according to an embodiment of the present application. As shown in fig. 6, the robot 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in said memory 61 and executable on said processor 60, for example a repositioning program of the robot. The processor 60, when executing the computer program 62, implements the steps of the repositioning method embodiments of the respective robots described above. Alternatively, the processor 60, when executing the computer program 62, performs the functions of the modules/units of the apparatus embodiments described above.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 62 in the robot 6.
The robot may include, but is not limited to, a processor 60 and a memory 61. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the robot 6 and does not limit the robot 6, which may include more or fewer components than shown, or combine certain components, or use different components; for example, the robot may also include input and output devices, network access devices, buses, etc.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the robot 6, such as a hard disk or a memory of the robot 6. The memory 61 may also be an external storage device of the robot 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card (Flash Card) provided on the robot 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the robot 6. The memory 61 is used for storing the computer program and other programs and data required by the robot. The memory 61 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and the division of the modules or units, for example, is merely a logical functional division, and there may be additional divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present application may also implement all or part of the procedures in the methods of the above embodiments by means of a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, it can implement the steps of the respective method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be added to or removed from as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A method of repositioning a robot, the method comprising:
acquiring a first point cloud of a scene where the robot is located;
extracting a feature descriptor from the first point cloud, and performing preliminary repositioning on the robot according to the feature descriptor to determine a preliminary pose of the robot;
determining a second point cloud located within a preset range of the preliminary pose;
and determining a matching error of the first point cloud and the second point cloud by a point cloud matching method, and determining the repositioning pose of the robot according to the matching error.
2. The method of claim 1, wherein the determining the repositioning pose of the robot based on the matching error comprises:
representing the first point cloud and the second point cloud by an occupancy probability grid in case a minimum value of a matching error of the first point cloud and the second point cloud is larger than a predetermined first error threshold;
and matching the second point cloud with the first point cloud through a branch-and-bound method, and determining the repositioning pose of the robot according to the condition that the sum of the occupation probability scores of the grid areas in the second point cloud matched with the first point cloud is maximum.
3. The method of claim 2, wherein the determining the repositioning pose of the robot based on the situation where the sum of occupancy probability scores of grid regions in the second point cloud matched by the first point cloud is greatest comprises:
determining a first pose of the robot under the condition that the sum of the occupation probability scores of the grid areas in the second point cloud matched with the first point cloud is maximum;
and under the condition that the matching error between the first pose and the pose corresponding to the first point cloud is larger than a preset second error threshold, determining the repositioning pose of the robot by a brute-force matching method.
4. The method of claim 1, wherein the extracting feature descriptors in the first point cloud comprises:
performing radial division and circumferential division on the first point cloud to obtain a first grid set, and/or performing horizontal division and vertical division on the first point cloud to obtain a second grid set;
and determining a first feature description sub-matrix corresponding to the first grid set and a second feature description sub-matrix corresponding to the second grid set according to the descriptors, wherein the maximum value of the point cloud in each grid in the direction perpendicular to the horizontal plane is used as the descriptor of that grid.
5. The method of claim 4, wherein the preliminary repositioning of the robot based on the feature descriptors comprises:
compressing the first feature description submatrix and/or the second feature description submatrix into one-dimensional features;
matching the one-dimensional features of the first point cloud against the one-dimensional features of the key frames in the constructed map, and determining candidate key frames;
and matching the candidate key frame with the first feature description submatrix and/or the second feature description submatrix of the first point cloud, and performing preliminary repositioning on the robot.
6. The method of claim 5, wherein compressing the first and/or second feature description sub-matrices into one-dimensional features comprises:
determining the one-dimensional feature according to the sum value or the average value of descriptors of each row in the first feature description sub-matrix and/or the second feature description sub-matrix;
or determining the one-dimensional feature according to the sum value or the average value of descriptors of each column in the first feature description sub-matrix and/or the second feature description sub-matrix.
7. The method according to any one of claims 1-6, wherein preliminary repositioning of the robot according to the feature descriptors comprises:
determining a local key frame according to the initial pose of the robot;
and performing preliminary repositioning on the robot according to the feature descriptors of the local key frames and the feature descriptors of the first point cloud.
8. A repositioning apparatus for a robot, the apparatus comprising:
the point cloud acquisition unit is used for acquiring a first point cloud of a scene where the robot is located;
the preliminary pose determining unit is used for extracting feature descriptors from the first point cloud, and performing preliminary repositioning on the robot according to the feature descriptors to determine the preliminary pose of the robot;
a second point cloud determining unit for determining a second point cloud located within the predetermined range of the preliminary pose;
and the repositioning pose determining unit is used for determining the matching error of the first point cloud and the second point cloud through a point cloud matching method and determining the repositioning pose of the robot according to the matching error.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
CN202310446731.8A 2023-04-14 2023-04-14 Robot and repositioning method, device and storage medium thereof Pending CN116580215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310446731.8A CN116580215A (en) 2023-04-14 2023-04-14 Robot and repositioning method, device and storage medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310446731.8A CN116580215A (en) 2023-04-14 2023-04-14 Robot and repositioning method, device and storage medium thereof

Publications (1)

Publication Number Publication Date
CN116580215A 2023-08-11

Family

ID=87543702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310446731.8A Pending CN116580215A (en) 2023-04-14 2023-04-14 Robot and repositioning method, device and storage medium thereof

Country Status (1)

Country Link
CN (1) CN116580215A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination