CN115326051A - Positioning method and device based on dynamic scene, robot and medium - Google Patents

Positioning method and device based on dynamic scene, robot and medium

Info

Publication number
CN115326051A
CN115326051A (application CN202210929111.5A)
Authority
CN
China
Prior art keywords
frame
attitude
laser
map
current frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210929111.5A
Other languages
Chinese (zh)
Inventor
李瀚文
柏林
刘彪
舒海燕
袁添厦
沈创芸
祝涛剑
方映峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Gosuncn Robot Co Ltd
Original Assignee
Guangzhou Gosuncn Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Gosuncn Robot Co Ltd filed Critical Guangzhou Gosuncn Robot Co Ltd
Priority to CN202210929111.5A
Publication of CN115326051A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20 Instruments for performing navigational calculations
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a positioning method based on a dynamic scene, which comprises the following steps: S1, positioning based on a prior map using the acquired 3D lidar data; S2, when the prior-map-based positioning result is incorrect, obtaining the pose T_m of the robot with the laser odometer; S3, adding the pose transformation between the current frame and the previous frame obtained by the laser odometer into a factor graph. Since the pose at the previous moment already exists in the factor graph, the initial value of the pose at the current moment is obtained by composing that pose with the transformation predicted by the laser odometer; this initial value serves as the initial value of the observed NDT registration, and the final NDT registration result is taken as the current accurate pose. The method first performs pose matching against the prior lidar map; when matching fails, the laser odometer supplies the pose transformation between the current frame and the previous frame, and that transformation is used as the initial value of the observation from which the final pose is obtained.

Description

Positioning method and device based on dynamic scene, robot and medium
Technical Field
The invention relates to the technical field of robots, in particular to a positioning method and device based on a dynamic scene, a robot and a medium.
Background
Existing outdoor laser SLAM positioning algorithms assume a static environment: when positioning against an existing prior map, a registration algorithm is mostly used as the observation that corrects the predicted pose. However, when dynamic obstacles such as pedestrians and moving vehicles appear in the environment, or the prior map has changed significantly, a static registration algorithm struggles to converge accurately and the positioning accuracy drops sharply.
Outdoor patrol-inspection robots usually work in large-scale scenes such as campuses, factories, and parking lots. In a parking lot, for example, the prior map is built first and functions such as navigation are then enabled; later, most of the vehicles leave and the scene changes. At that point, positioning registration against the prior map diverges because of the map change and cannot converge to the correct pose, greatly reducing the robustness of the positioning system.
The current approach to dynamic scenes is to rebuild the prior map whenever positioning is lost or the scene is found to have changed substantially. Rebuilding the prior map, however, is a relatively cumbersome operation.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the material described in this section is not prior art to the claims in this application and is not admitted to be prior art by inclusion in this section.
Disclosure of Invention
In order to solve the above technical problems in the related art, the present invention provides a positioning method based on a dynamic scene, comprising the following steps:
S1, positioning based on a prior map using the acquired 3D lidar data;
S2, when the prior-map-based positioning result is incorrect, obtaining the pose T_m of the robot with the laser odometer;
And S3, adding the pose transformation between the current frame and the previous frame obtained by the laser odometer into a factor graph. Since the pose at the previous moment already exists in the factor graph, the initial value of the pose at the current moment is obtained by composing that pose with the transformation predicted by the laser odometer; this initial value is used as the initial value of the observed NDT registration, and the final NDT registration result is the current accurate pose.
Specifically, the positioning based on the prior map is based on NDT (Normal Distributions Transform) registration.
Specifically, the step S1 specifically includes: s11, dividing the prior map into cubes;
S12, calculating a probability distribution model of each cube;
S13, transforming the current frame obtained by the lidar into the corresponding cubes of the map according to the prior pose, obtaining the transformed points x'_i;
S14, calculating the probability density of each conversion point in the corresponding cube according to the probability distribution model;
S15, adding up the probability densities calculated for each cube to obtain an NDT registration score;
S16, optimizing the score with a Newton optimization algorithm, finding the optimal pose that maximizes the score;
S17, judging whether the score exceeds a threshold; if not, the prior-map-based positioning result is judged incorrect.
Specifically, the step S2 specifically includes:
S21, acquiring a keyframe, adding it into the keyframe queue, and simultaneously storing its laser pose;
S22, obtaining a submap;
S23, registering the current frame against the previous frame with the GICP algorithm to obtain the relative pose transformation T_s between the two frames;
S24, multiplying T_s by the pose of the previous frame to obtain the predicted pose T̂_m of the current frame in the prior-map coordinate system;
S25, using the predicted pose T̂_m, registering the current frame against the submap with the GICP algorithm to obtain the pose T_m of the current frame in the prior-map coordinate system.
Specifically, the step S21 specifically includes:
judging whether the current frame obtained by the laser odometer is the first frame; if so, adding it into the keyframe queue and storing the current laser pose;
otherwise, judging whether the translation between the current frame and the previous keyframe exceeds a first threshold or the rotation exceeds a first threshold angle; if so, adding the current frame into the keyframe queue.
In a second aspect, another embodiment of the present invention discloses a positioning apparatus based on dynamic scenes, which includes the following units:
the prior map positioning unit is used for positioning based on a prior map by using the acquired 3D laser radar data;
a laser pose calculation unit, used for obtaining the pose T_m of the robot with a laser odometer when the prior-map-based positioning result is incorrect;
And a pose correction unit, used for adding the pose transformation between the current frame and the previous frame obtained by the laser odometer into a factor graph; since the pose at the previous moment already exists in the factor graph, the initial value of the pose at the current moment is obtained through the pose transformation predicted by the laser odometer, that initial value is used as the initial value of the observed NDT registration, and the final NDT registration result is the current accurate pose.
Specifically, the prior map positioning unit further includes:
the cube dividing unit is used for dividing the prior map into cubes;
the probability distribution model acquisition unit is used for calculating a probability distribution model of each cube;
a current frame conversion unit, used for transforming the current frame obtained by the lidar into the corresponding cubes of the map according to the prior pose, obtaining the transformed points x'_i;
The probability density calculation unit is used for calculating the probability density of each conversion point in the corresponding cube according to the probability distribution model;
the NDT registration score calculating unit is used for adding the probability densities calculated by each cube to obtain an NDT registration score;
the NDT registration score optimization unit, used for optimizing the score with a Newton optimization algorithm, finding the optimal pose that maximizes the score;
and the prior-map positioning result judging unit, used for judging whether the score exceeds a threshold; if it does not, the prior-map-based positioning result is judged incorrect.
Specifically, the laser attitude calculation unit further includes:
a keyframe acquisition unit, used for acquiring a keyframe, adding it into the keyframe queue, and simultaneously storing its laser pose;
a submap acquisition unit, used for obtaining a submap;
an inter-frame pose obtaining unit, used for registering the current frame against the previous frame with the GICP algorithm to obtain the relative pose transformation T_s between the two frames;
an inter-frame pose prediction unit, used for multiplying T_s by the pose of the previous frame to obtain the predicted pose T̂_m of the current frame in the prior-map coordinate system;
a submap pose acquisition unit, used for registering the current frame against the submap with the GICP algorithm, starting from the predicted pose T̂_m, to obtain the pose T_m of the current frame in the prior-map coordinate system.
Specifically, the keyframe acquisition unit further includes:
judging whether the current frame obtained by the laser odometer is the first frame; if so, adding it into the keyframe queue and storing the current laser pose;
otherwise, judging whether the translation between the current frame and the previous keyframe exceeds a first threshold or the rotation exceeds a first threshold angle; if so, adding the current frame into the keyframe queue.
In a third aspect, another embodiment of the present invention discloses a robot, which includes a central processing unit, a memory, a 3D lidar, and a laser odometer, wherein the memory stores instructions, and the processor is configured to implement the dynamic scene-based positioning method when executing the instructions.
In a fourth aspect, another embodiment of the present invention discloses a non-volatile storage medium having stored thereon instructions that, when executed by a processor, implement the dynamic-scene-based positioning method.
The positioning method based on the dynamic scene first uses the prior lidar map for pose matching; when matching fails, the laser odometer obtains the pose of the current frame, and the pose obtained from the prior map is corrected with the pose obtained by the laser odometer to produce the final pose. The invention solves the problem that, in a dynamic scene, pose matching against the prior map becomes inaccurate because dynamic objects have changed. The method of the invention does not need to rebuild the prior map and thus avoids the large amount of work of re-mapping.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a flowchart of a positioning method based on a dynamic scene according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a positioning apparatus based on a dynamic scene according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a positioning apparatus based on a dynamic scene according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
Example one
The robot of this embodiment comprises a 3D lidar, an IMU, and wheel odometers.
The outdoor scene of this embodiment contains many vehicles, trees, buildings, and the like (conditions an outdoor robot generally encounters), and in such a scene the positioning method based on prior-map registration fails.
Referring to fig. 1, the present embodiment discloses a positioning method based on a dynamic scene, which includes the following steps:
S1, positioning based on a prior map using the acquired 3D lidar data;
the robot of this embodiment has a central processing unit, the central processing unit receives 3D lidar data through a callback function, the 3D lidar data includes but is not limited to: x-coordinate value, y-coordinate value, z-coordinate value of each point, and a timestamp. Obtaining a prior attitude by utilizing IMU data and wheel type mileage counting data, and carrying out NDT registration through the prior attitude, wherein the specific process is as follows:
s11, dividing the prior map into cubes;
the prior map of the embodiment is a map established by the robot in advance, and after the robot establishes the map, the prior map may change, for example, some dynamic scenes such as driving away of a vehicle staying in the prior map change, so that the prior map changes.
The cube distribution in this example is 20cm in size.
S12, calculating a probability distribution model of each cube;
the mean q and covariance σ of the points contained in each cube are calculated separately:
Figure BDA0003780884630000071
Figure BDA0003780884630000072
wherein x i Is the three-dimensional coordinate of the ith point in the current cube.
The probability model p (x) for each point in the cube is then:
Figure BDA0003780884630000073
where x is the three-dimensional point in the current cube.
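For illustration, the following is a minimal numpy sketch of this per-cube Gaussian model (the function and variable names are assumptions made for illustration; the 20 cm cell size follows this embodiment):

    import numpy as np
    from collections import defaultdict

    def build_ndt_cells(map_points, cell_size=0.2):
        # Group prior-map points into cubes and fit a Gaussian (mean q, covariance sigma) per cube.
        cells = defaultdict(list)
        for p in map_points:                                  # map_points: (N, 3) array
            cells[tuple(np.floor(p / cell_size).astype(int))].append(p)
        models = {}
        for key, pts in cells.items():
            pts = np.asarray(pts)
            if len(pts) < 5:                                  # too few points for a stable covariance
                continue
            q = pts.mean(axis=0)
            d = pts - q
            models[key] = (q, d.T @ d / len(pts))             # (mean, covariance) as in the formulas above
        return models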
S13, transforming the current frame obtained by the lidar into the corresponding cubes of the map according to the prior pose, obtaining the transformed points x'_i;
S14, calculating the probability density of each conversion point in the corresponding cube according to the probability distribution model;
S15, adding up the probability densities calculated for each cube to obtain an NDT registration score:
score(T) = Σ_i p(x'_i)
S16, optimizing the score with a Newton optimization algorithm, finding the optimal pose that maximizes the score;
S17, judging whether the score exceeds a threshold; if not, the prior-map-based positioning result is judged incorrect.
Finally, whether the registration succeeds is determined by the score; the threshold is currently set to 2.0. In a dynamic scene, the score of the registration algorithm hardly reaches this threshold. If the threshold is not reached, the positioning is considered unreliable, and the laser odometer is started.
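A sketch of how this score and threshold check might look, building on build_ndt_cells above (the 2.0 threshold is the value stated in this embodiment; everything else is illustrative):

    def ndt_score(scan_points, pose, models, cell_size=0.2):
        # Transform the current frame by the prior pose and sum the per-cube probability densities.
        R, t = pose[:3, :3], pose[:3, 3]                      # pose: 4x4 homogeneous matrix
        score = 0.0
        for p in scan_points:
            x = R @ p + t                                     # transformed point x'_i
            key = tuple(np.floor(x / cell_size).astype(int))
            if key not in models:
                continue
            q, sigma = models[key]
            sigma = sigma + 1e-6 * np.eye(3)                  # regularize a near-singular covariance
            d = x - q
            score += np.exp(-0.5 * d @ np.linalg.solve(sigma, d))
        return score

    def prior_map_localization_ok(score, threshold=2.0):
        # Below the threshold the registration is considered failed and the laser odometer takes over.
        return score > threshold

The Newton step over the pose that maximizes this score is omitted here.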
S2, when the prior-map-based positioning result is incorrect, the laser odometer is used to obtain the pose T_m of the robot.
The robot maintains a laser-odometry factor in advance; the laser odometer provides the pose transformation between two frames, which is added into the overall positioning constraint.
The method comprises the following specific steps:
S21, acquiring a keyframe, adding it into the keyframe queue, and simultaneously storing its laser pose;
judging whether the current frame obtained by the laser odometer is the first frame; if so, adding it into the keyframe queue and storing the current laser pose;
otherwise, judging whether the current frame has translated more than 30 cm or rotated more than 45 degrees relative to the previous keyframe; if either threshold is exceeded, adding the current frame into the keyframe queue.
It is also judged whether the number of keyframes in the queue exceeds 100; if so, keyframes are popped from the head of the queue, so that the queue holds at most 100 keyframes.
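A minimal sketch of this keyframe policy, assuming 4x4 homogeneous poses (the names and data layout are illustrative, not from the patent):

    from collections import deque
    import numpy as np

    keyframes = deque(maxlen=100)                             # deque(maxlen=100) pops the head automatically

    def maybe_add_keyframe(cloud, pose):
        # The first frame is always a keyframe; afterwards require > 30 cm translation or > 45 deg rotation.
        if not keyframes:
            keyframes.append((cloud, pose))
            return True
        delta = np.linalg.inv(keyframes[-1][1]) @ pose        # motion relative to the last keyframe
        translation = np.linalg.norm(delta[:3, 3])
        cos_angle = np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        if translation > 0.3 or np.degrees(np.arccos(cos_angle)) > 45.0:
            keyframes.append((cloud, pose))
            return True
        return False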
Specifically, the method further includes, before step S21:
s20, down-sampling each frame acquired by the laser radar;
the down-sampling parameter of this embodiment is set to 0.5m.
S22, obtaining a submap;
and selecting 50 frames nearest to the current frame to form a submap according to the posture of each frame.
S23, registering the current frame against the previous frame with the GICP algorithm to obtain the relative pose transformation T_s between the two frames;
S24, multiplying T_s by the pose of the previous frame to obtain the predicted pose T̂_m of the current frame in the prior-map coordinate system;
S25, using the predicted pose T̂_m, registering the current frame against the submap with the GICP algorithm to obtain the pose T_m of the current frame in the prior-map coordinate system.
The calculation steps of the GICP algorithm of this embodiment are as follows:
and S30, calculating corresponding points.
S31, for each point in the source and target point clouds, calculating the covariance matrix of its 5 nearest neighbors.
S32, calculating the following formula:
T = argmin_T Σ_i d_i^T (C_i^B + T C_i^A T^T)⁻¹ d_i
where d_i = b_i − T a_i is the residual between the i-th pair of corresponding points a_i (source) and b_i (target), C_i^B is the covariance matrix of the i-th point in the target point cloud, and C_i^A is the covariance matrix of the i-th point in the source point cloud. Minimizing this objective function yields the optimal transformation T.
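For clarity, here is a sketch that evaluates this objective for already-matched point pairs; a full GICP solver would minimize it over T, and the correspondence search of step S30 is omitted (all names are illustrative):

    def gicp_cost(T, src, tgt, cov_src, cov_tgt):
        # Evaluate sum_i d_i^T (C_i^B + T C_i^A T^T)^(-1) d_i for matched pairs (a_i, b_i).
        R, t = T[:3, :3], T[:3, 3]
        cost = 0.0
        for a, b, Ca, Cb in zip(src, tgt, cov_src, cov_tgt):
            d = b - (R @ a + t)                               # residual between corresponding points
            M = Cb + R @ Ca @ R.T                             # combined covariance of the pair
            cost += d @ np.linalg.solve(M, d)
        return cost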
And S3, adding the pose transformation between the current frame and the previous frame obtained by the laser odometer into a factor graph. Since the pose at the previous moment already exists in the factor graph, the initial value of the pose at the current moment is obtained by composing that pose with the transformation predicted by the laser odometer; this initial value is used as the initial value of the observed NDT registration, and the final NDT registration result is the current accurate pose.
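The factor-graph bookkeeping is not spelled out in the patent; as a minimal sketch, the initial value handed to the observed NDT registration is simply the previous optimized pose composed with the odometry-predicted frame-to-frame transformation (all names below are hypothetical):

    def ndt_initial_value(prev_pose, odom_delta):
        # Compose the previous optimized pose with the laser-odometry delta T_s
        # to seed the observed NDT registration (both are 4x4 homogeneous matrices).
        return prev_pose @ odom_delta

    # Illustrative flow:
    #   T_init = ndt_initial_value(T_prev, T_s)
    #   T_now  = optimize_ndt(scan, models, init=T_init)      # hypothetical NDT solver call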
The positioning method based on the dynamic scene in this embodiment first uses the prior lidar map for pose matching; when matching fails, the laser odometer obtains the pose of the current frame, and the pose obtained from the prior map is corrected with the pose obtained by the laser odometer to produce the final pose. This solves the problem that, in a dynamic scene, pose matching against the prior map becomes inaccurate because dynamic objects have changed. The method of this embodiment does not need to rebuild the prior map and thus avoids the large amount of work of re-mapping.
Example two
Referring to fig. 2, the present embodiment discloses a positioning apparatus based on a dynamic scene, which includes the following units:
the prior map positioning unit is used for positioning based on a prior map by using the acquired 3D laser radar data;
the robot of this embodiment has a central processing unit, and the central processing unit receives 3D lidar data through a callback function, and the 3D lidar data includes but is not limited to: x-coordinate value, y-coordinate value, z-coordinate value of each point, and a timestamp. The method comprises the following steps of obtaining a priori attitude by utilizing IMU data and wheel type odometry data, and carrying out NDT registration through the priori attitude, wherein the method also comprises the following units:
the cube dividing unit is used for dividing the prior map into cubes;
the prior map of the embodiment is a map established by the robot in advance, and after the robot establishes the map, the prior map may change, for example, some dynamic scenes such as driving away of a vehicle staying in the prior map change, so that the prior map changes.
The cube distribution in this example is 20cm in size.
The probability distribution model acquisition unit is used for calculating a probability distribution model of each cube;
The mean q and covariance σ of the points contained in each cube are calculated separately:
q = (1/n) Σ_i x_i
σ = (1/n) Σ_i (x_i − q)(x_i − q)^T
where x_i is the three-dimensional coordinate of the i-th point in the current cube and n is the number of points in the cube.
The probability model p(x) for each point in the cube is then:
p(x) ∝ exp(−(1/2) (x − q)^T σ⁻¹ (x − q))
where x is a three-dimensional point in the current cube.
A current frame conversion unit, used for transforming the current frame obtained by the lidar into the corresponding cubes of the map according to the prior pose, obtaining the transformed points x'_i;
The probability density calculation unit is used for calculating the probability density of each conversion point falling in the corresponding cube according to the probability distribution model;
the NDT registration score calculating unit is used for adding the probability densities calculated by each cube to obtain an NDT registration score;
score(T) = Σ_i p(x'_i)
the NDT registration score optimization unit, used for optimizing the score with a Newton optimization algorithm, finding the optimal pose that maximizes the score;
and the prior-map positioning result judging unit, used for judging whether the score exceeds a threshold; if it does not, the prior-map-based positioning result is judged incorrect.
Finally, whether the registration succeeds is determined by the score; the threshold is currently set to 2.0. In a dynamic scene, the score of the registration algorithm hardly reaches this threshold. If the threshold is not reached, the positioning is considered unreliable, and the laser odometer is started.
A laser pose calculation unit, used for obtaining the pose T_m of the robot with a laser odometer when the prior-map-based positioning result is incorrect.
The robot maintains a laser-odometry factor in advance; the laser odometer provides the pose between two frames, which is added into the overall positioning constraint.
The device also comprises the following units:
a keyframe acquisition unit, used for acquiring a keyframe, adding it into the keyframe queue, and simultaneously storing its laser pose;
judging whether the current frame obtained by the laser odometer is the first frame; if so, adding it into the keyframe queue and storing the current laser pose;
otherwise, judging whether the current frame has translated more than 30 cm or rotated more than 45 degrees relative to the previous keyframe; if either threshold is exceeded, adding the current frame into the keyframe queue.
It is also judged whether the number of keyframes in the queue exceeds 100; if so, keyframes are popped from the head of the queue, so that the queue holds at most 100 keyframes.
Specifically, the method further comprises the following steps:
the down-sampling unit is used for down-sampling each frame from the laser;
the down-sampling parameter of this embodiment is set to 0.5m.
A submap acquisition unit, used for obtaining a submap;
and selecting 50 frames nearest to the current frame to form a submap according to the posture of each frame.
An inter-frame pose obtaining unit, used for registering the current frame against the previous frame with the GICP algorithm to obtain the relative pose transformation T_s between the two frames;
An inter-frame pose prediction unit, used for multiplying T_s by the pose of the previous frame to obtain the predicted pose T̂_m of the current frame in the prior-map coordinate system;
A submap pose acquisition unit, used for registering the current frame against the submap with the GICP algorithm, starting from the predicted pose T̂_m, to obtain the pose T_m of the current frame in the prior-map coordinate system.
The calculation steps of the GICP algorithm of this embodiment are as follows:
and S30, calculating corresponding points.
S31, for each point in the source and target point clouds, calculating the covariance matrix of its 5 nearest neighbors.
S32, calculating the following formula:
T = argmin_T Σ_i d_i^T (C_i^B + T C_i^A T^T)⁻¹ d_i
where d_i = b_i − T a_i is the residual between the i-th pair of corresponding points a_i (source) and b_i (target), C_i^B is the covariance matrix of the i-th point in the target point cloud, and C_i^A is the covariance matrix of the i-th point in the source point cloud. Minimizing this objective function yields the optimal transformation T.
And the pose correction unit, used for adding the pose transformation between the current frame and the previous frame obtained by the laser odometer into the factor graph; since the pose at the previous moment already exists in the factor graph, the initial value of the pose at the current moment is obtained through the pose transformation predicted by the laser odometer, that initial value is used as the initial value of the observed NDT registration, and the final NDT registration result is the current accurate pose.
The positioning device based on the dynamic scene in this embodiment first uses the prior lidar map for pose matching; when matching fails, the laser odometer obtains the pose of the current frame, and the pose obtained from the prior map is corrected with the pose obtained by the laser odometer to produce the final pose. This solves the problem that, in a dynamic scene, pose matching against the prior map becomes inaccurate because dynamic objects have changed. The device of this embodiment does not need to rebuild the prior map and thus avoids the large amount of work of re-mapping.
EXAMPLE III
The embodiment discloses a robot, which comprises a central processing unit, a memory, a 3D laser radar and a laser odometer, wherein instructions are stored in the memory, and the processor is used for realizing a positioning method based on a dynamic scene when executing the instructions.
Example four
Referring to fig. 3, fig. 3 is a schematic structural diagram of a positioning apparatus based on a dynamic scene according to this embodiment. The dynamic scene based positioning device 20 of this embodiment comprises a processor 21, a memory 22 and a computer program stored in said memory 22 and executable on said processor 21. The steps in the above-described method embodiments are implemented when the processor 21 executes the computer program. Alternatively, the processor 21 implements the functions of the modules/units in the above-described device embodiments when executing the computer program.
Illustratively, the computer program may be partitioned into one or more modules/units, which are stored in the memory 22 and executed by the processor 21 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the dynamic scenario based positioning apparatus 20. For example, the computer program may be divided into the modules in the second embodiment, and for the specific functions of the modules, reference is made to the working process of the apparatus in the foregoing embodiment, which is not described herein again.
The dynamic scene-based positioning apparatus 20 may include, but is not limited to, a processor 21, a memory 22. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the dynamic scenario-based positioning apparatus 20, and does not constitute a limitation of the dynamic scenario-based positioning apparatus 20, and may include more or fewer components than those shown, or some components may be combined, or different components may be included, for example, the dynamic scenario-based positioning apparatus 20 may further include an input-output device, a network access device, a bus, etc.
The processor 21 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), an off-the-shelf Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor 21 is the control center of the dynamic-scene-based positioning apparatus 20, and various interfaces and lines connect the parts of the entire apparatus.
The memory 22 may be used to store the computer programs and/or modules, and the processor 21 implements the various functions of the dynamic-scene-based positioning apparatus 20 by running or executing the computer programs and/or modules stored in the memory 22 and calling the data stored in the memory 22. The memory 22 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to use (such as audio data, a phonebook, etc.). In addition, the memory 22 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Wherein, the module/unit integrated by the positioning device 20 based on dynamic scene can be stored in a computer readable storage medium if it is implemented in the form of software functional unit and sold or used as a stand-alone product. Based on such understanding, all or part of the flow of the method according to the above embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and used by the processor 21 to implement the steps of the above embodiments of the method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, read-Only Memory (ROM), random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A positioning method based on dynamic scenes comprises the following steps:
S1, positioning based on a prior map using the acquired 3D lidar data;
S2, when the prior-map-based positioning result is incorrect, obtaining the pose T_m of the robot with the laser odometer;
And S3, adding the pose transformation between the current frame and the previous frame obtained by the laser odometer into a factor graph. Since the pose at the previous moment already exists in the factor graph, the initial value of the pose at the current moment is obtained by composing that pose with the transformation predicted by the laser odometer; this initial value is used as the initial value of the observed NDT registration, and the final NDT registration result is the current accurate pose.
2. The method of claim 1, wherein the prior-map-based positioning is based on NDT registration.
3. The method according to claim 2, wherein the step S1 specifically comprises: s11, dividing the prior map into cubes;
S12, calculating a probability distribution model of each cube;
S13, transforming the current frame obtained by the lidar into the corresponding cubes of the map according to the prior pose, obtaining the transformed points x'_i;
S14, calculating the probability density of each conversion point in the corresponding cube according to the probability distribution model;
S15, adding up the probability densities calculated for each cube to obtain an NDT registration score;
S16, optimizing the score with a Newton optimization algorithm, finding the optimal pose that maximizes the score;
S17, judging whether the score exceeds a threshold; if not, the prior-map-based positioning result is judged incorrect.
4. The method according to claim 1, wherein the step S2 specifically comprises:
S21, acquiring a keyframe, adding it into the keyframe queue, and simultaneously storing its laser pose;
S22, obtaining a submap;
S23, registering the current frame against the previous frame with the GICP algorithm to obtain the relative pose transformation T_s between the two frames;
S24, multiplying T_s by the pose of the previous frame to obtain the predicted pose T̂_m of the current frame in the prior-map coordinate system;
S25, using the predicted pose T̂_m, registering the current frame against the submap with the GICP algorithm to obtain the pose T_m of the current frame in the prior-map coordinate system.
5. The method according to claim 4, wherein the step S21 specifically comprises:
judging whether the current frame obtained by the laser odometer is the first frame; if so, adding it into the keyframe queue and storing the current laser pose;
otherwise, judging whether the translation between the current frame and the previous keyframe exceeds a first threshold or the rotation exceeds a first threshold angle; if so, adding the current frame into the keyframe queue.
6. A dynamic scene based positioning device, comprising the following units:
the prior map positioning unit is used for positioning based on a prior map by using the acquired 3D laser radar data;
a laser pose calculation unit, used for obtaining the pose T_m of the robot with a laser odometer when the prior-map-based positioning result is incorrect;
and the pose correction unit, used for adding the pose transformation between the current frame and the previous frame obtained by the laser odometer into the factor graph; since the pose at the previous moment already exists in the factor graph, the initial value of the pose at the current moment is obtained through the pose transformation predicted by the laser odometer, that initial value is used as the initial value of the observed NDT registration, and the final NDT registration result is the current accurate pose.
7. The apparatus of claim 6, wherein the prior map positioning unit further comprises:
the cube dividing unit is used for dividing the prior map into cubes;
the probability distribution model acquisition unit is used for calculating a probability distribution model of each cube;
a current frame conversion unit, used for transforming the current frame obtained by the lidar into the corresponding cubes of the map according to the prior pose, obtaining the transformed points x'_i;
The probability density calculation unit is used for calculating the probability density of each conversion point falling in the corresponding cube according to the probability distribution model;
the NDT registration score calculating unit is used for adding the probability densities calculated by each cube to obtain an NDT registration score;
the NDT registration score optimization unit, used for optimizing the score with a Newton optimization algorithm, finding the optimal pose that maximizes the score;
and the prior-map positioning result judging unit, used for judging whether the score exceeds a threshold; if it does not, the prior-map-based positioning result is judged incorrect.
8. The apparatus of claim 6, wherein the laser pose calculation unit further comprises:
a keyframe acquisition unit, used for acquiring a keyframe, adding it into the keyframe queue, and simultaneously storing its laser pose;
a submap acquisition unit, used for obtaining a submap;
an inter-frame pose obtaining unit, used for registering the current frame against the previous frame with the GICP algorithm to obtain the relative pose transformation T_s between the two frames;
an inter-frame pose prediction unit, used for multiplying T_s by the pose of the previous frame to obtain the predicted pose T̂_m of the current frame in the prior-map coordinate system;
a submap pose acquisition unit, used for registering the current frame against the submap with the GICP algorithm, starting from the predicted pose T̂_m, to obtain the pose T_m of the current frame in the prior-map coordinate system.
9. The apparatus of claim 8, wherein the keyframe acquisition unit further comprises:
judging whether the current frame obtained by the laser odometer is the first frame; if so, adding it into the keyframe queue and storing the current laser pose;
otherwise, judging whether the translation between the current frame and the previous keyframe exceeds a first threshold or the rotation exceeds a first threshold angle; if so, adding the current frame into the keyframe queue.
10. A robot comprising a central processor, a memory, a 3D lidar, a laser odometer, the memory having instructions stored thereon, the processor when executing the instructions being configured to implement the method of any of claims 1-5.
CN202210929111.5A 2022-08-03 2022-08-03 Positioning method and device based on dynamic scene, robot and medium Pending CN115326051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210929111.5A CN115326051A (en) 2022-08-03 2022-08-03 Positioning method and device based on dynamic scene, robot and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210929111.5A CN115326051A (en) 2022-08-03 2022-08-03 Positioning method and device based on dynamic scene, robot and medium

Publications (1)

Publication Number Publication Date
CN115326051A true CN115326051A (en) 2022-11-11

Family

ID=83922002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210929111.5A Pending CN115326051A (en) 2022-08-03 2022-08-03 Positioning method and device based on dynamic scene, robot and medium

Country Status (1)

Country Link
CN (1) CN115326051A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116224349A (en) * 2022-12-12 2023-06-06 珠海创智科技有限公司 Robot positioning method, system and electronic device
CN116539026A (en) * 2023-07-06 2023-08-04 杭州华橙软件技术有限公司 Map construction method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180299557A1 (en) * 2017-04-17 2018-10-18 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for updating maps
CN112612029A (en) * 2020-12-24 2021-04-06 哈尔滨工业大学芜湖机器人产业技术研究院 Grid map positioning method fusing NDT and ICP
US20210270609A1 (en) * 2020-03-02 2021-09-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Method, apparatus, computing device and computer-readable storage medium for positioning
CN113376650A (en) * 2021-08-09 2021-09-10 浙江华睿科技股份有限公司 Mobile robot positioning method and device, electronic equipment and storage medium
CN113701760A (en) * 2021-09-01 2021-11-26 火种源码(中山)科技有限公司 Robot anti-interference positioning method and device based on sliding window pose graph optimization
WO2021253430A1 (en) * 2020-06-19 2021-12-23 深圳市大疆创新科技有限公司 Absolute pose determination method, electronic device and mobile platform
WO2022121640A1 (en) * 2020-12-07 2022-06-16 深圳市优必选科技股份有限公司 Robot relocalization method and apparatus, and robot and readable storage medium
CN114777770A (en) * 2022-03-29 2022-07-22 深圳优地科技有限公司 Robot positioning method, device, control terminal and readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180299557A1 (en) * 2017-04-17 2018-10-18 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for updating maps
US20210270609A1 (en) * 2020-03-02 2021-09-02 Beijing Baidu Netcom Science And Technology Co., Ltd. Method, apparatus, computing device and computer-readable storage medium for positioning
WO2021253430A1 (en) * 2020-06-19 2021-12-23 深圳市大疆创新科技有限公司 Absolute pose determination method, electronic device and mobile platform
WO2022121640A1 (en) * 2020-12-07 2022-06-16 深圳市优必选科技股份有限公司 Robot relocalization method and apparatus, and robot and readable storage medium
CN112612029A (en) * 2020-12-24 2021-04-06 哈尔滨工业大学芜湖机器人产业技术研究院 Grid map positioning method fusing NDT and ICP
CN113376650A (en) * 2021-08-09 2021-09-10 浙江华睿科技股份有限公司 Mobile robot positioning method and device, electronic equipment and storage medium
CN113701760A (en) * 2021-09-01 2021-11-26 火种源码(中山)科技有限公司 Robot anti-interference positioning method and device based on sliding window pose graph optimization
CN114777770A (en) * 2022-03-29 2022-07-22 深圳优地科技有限公司 Robot positioning method, device, control terminal and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王庆闪; 张军; 刘元盛; 张鑫晨: "Point cloud registration algorithm based on the combination of NDT and ICP" (基于NDT与ICP结合的点云配准算法), Computer Engineering and Applications (计算机工程与应用), no. 07, 1 April 2020 (2020-04-01), pages 88-95 *
胡向勇; 洪程智; 吴世全: "Keyframe-based point cloud mapping method" (基于关键帧的点云建图方法), Tropical Geomorphology (热带地貌), vol. 41, no. 01, 25 June 2020 (2020-06-25), pages 41-46 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116224349A (en) * 2022-12-12 2023-06-06 珠海创智科技有限公司 Robot positioning method, system and electronic device
CN116539026A (en) * 2023-07-06 2023-08-04 杭州华橙软件技术有限公司 Map construction method, device, equipment and storage medium
CN116539026B (en) * 2023-07-06 2023-09-29 杭州华橙软件技术有限公司 Map construction method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
JP6595182B2 (en) Systems and methods for mapping, locating, and attitude correction
CN112764053B (en) Fusion positioning method, device, equipment and computer readable storage medium
CN111026131B (en) Expansion region determining method and device, robot and storage medium
CN115326051A (en) Positioning method and device based on dynamic scene, robot and medium
CN112595337B (en) Obstacle avoidance path planning method and device, electronic device, vehicle and storage medium
CN116255992A (en) Method and device for simultaneously positioning and mapping
WO2021254019A1 (en) Method, device and system for cooperatively constructing point cloud map
CN112198878B (en) Instant map construction method and device, robot and storage medium
US20220412742A1 (en) Coordinate determination method and apparatus, computer device and storage medium
CN112686951A (en) Method, device, terminal and storage medium for determining robot position
CN112219225A (en) Positioning method, system and movable platform
CN112558036B (en) Method and device for outputting information
CN112800351B (en) Track similarity judging method, system and computer medium
CN111504335B (en) Map construction method and device, electronic equipment and storage medium
CN114646317A (en) Vehicle visual positioning navigation control method and device, computer equipment and medium
CN113808196A (en) Plane fusion positioning method and device, electronic equipment and storage medium
CN116892925A (en) 2D grid map dynamic updating method, device and robot
CN116382308B (en) Intelligent mobile machinery autonomous path finding and obstacle avoiding method, device, equipment and medium
CN112435293B (en) Method and device for determining structural parameter representation of lane line
CN112068547B (en) AMCL-based robot positioning method and device and robot
CN115409986A (en) Laser SLAM loop detection method and device based on point cloud semantics and robot
CN116502479B (en) Collision detection method and device of three-dimensional object in simulation environment
CN112965076B (en) Multi-radar positioning system and method for robot
CN115963822A (en) Particle filter positioning-based slip processing method and device, medium and robot
CN116698038A (en) Positioning loss judging method and device of robot and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination