CN113503876B - Multi-sensor fusion laser radar positioning method, system and terminal - Google Patents


Info

Publication number
CN113503876B
CN113503876B (application CN202110777452.0A)
Authority
CN
China
Prior art keywords
pose
positioning
robot
repositioning
current moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110777452.0A
Other languages
Chinese (zh)
Other versions
CN113503876A (en)
Inventor
刘方
窦广正
Current Assignee
Shenzhen Huaxin Information Technology Co Ltd
Original Assignee
Shenzhen Huaxin Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huaxin Information Technology Co Ltd filed Critical Shenzhen Huaxin Information Technology Co Ltd
Priority: CN202110777452.0A
Publication of application: CN113503876A
Application granted
Publication of grant: CN113503876B
Legal status: Active


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C21/18 Stabilised platforms, e.g. by gyroscope
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/42 Simultaneous measurement of distance and other co-ordinates
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

According to the multi-sensor fusion laser radar positioning method, system and terminal, a two-dimensional map is constructed from data acquired by a laser radar sensor, an odometer and a gyroscope, and a high-precision pose with no accumulated error can be obtained by scan-matching the current laser radar scanning points against the built two-dimensional map. When interference from a dynamic object causes erroneous positioning, a repositioning thread and/or a fusion positioning thread is started. This effectively solves the problem of the robot not knowing its own position for a long period and losing its capacity for autonomous behavior, improves the robot's flexibility, effectively handles positioning errors, greatly reduces the time consumed by local repositioning relative to global repositioning, and improves the robot's working stability and positioning accuracy.

Description

Multi-sensor fusion laser radar positioning method, system and terminal
Technical Field
The invention relates to the field of robots, and in particular to a multi-sensor fusion laser radar positioning method, system and terminal.
Background
Technology is advancing rapidly, artificial intelligence has become an important direction of future development, and the autonomous robot is an important embodiment of that development. Autonomous robots often need human-like capabilities for autonomous movement, judgment and behavior, and in unknown environments the robot's positioning technology is an important precondition for autonomous behavior; laser radar positioning is therefore a technology of commercial application. However, some current laser radar positioning technologies suffer from low positioning accuracy and from accumulated errors that readily arise during long-time operation and are difficult to eliminate. In addition, when the robot's positioning goes wrong under the influence of a dynamic object, searching the map with global repositioning consumes a great deal of time, so the robot does not know its own position for a long period, loses its capacity for autonomous behavior, and its working efficiency and flexibility are greatly reduced.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention aims to provide a multi-sensor fusion laser radar positioning method, system and terminal, to solve the prior-art problems of low positioning accuracy, accumulated positioning error, loss of the robot pose during repositioning, and excessive time consumption.
To achieve the above and other related objects, the present invention provides a multi-sensor fusion laser radar positioning method applied to a mobile robot on which a laser radar sensor, an odometer and a gyroscope are disposed, the method comprising: constructing a two-dimensional map of the current environment based on the position information of each obstacle in the current environment acquired by the laser radar sensor and the pose information of the robot acquired in real time by the odometer and the gyroscope, and updating the two-dimensional map in real time; stopping updating of the two-dimensional map when a control instruction corresponding to completion of obstacle scanning is received, and saving the most recently updated two-dimensional map as the final two-dimensional map, the final two-dimensional map comprising the coordinate position information of each obstacle under the map coordinate system; loading the saved final two-dimensional map, and scan-matching the obstacle position information acquired by the laser radar sensor against the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system, to obtain the optimal pose of the robot at the current moment; judging whether inaccurate positioning occurs in the positioning process; if inaccurate positioning does not occur, taking the optimal pose as the final pose of the robot; and if inaccurate positioning occurs, performing a repositioning process and/or a fusion positioning process to obtain the final pose of the robot.
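The claimed steps can be sketched as one positioning cycle. This is a minimal illustration only; the callable names (`predict`, `match`, `is_inaccurate`, `relocalize`) are assumptions standing in for the patent's odometer/gyroscope prediction, scan matching, misalignment check and repositioning flow, not identifiers from the patent.

```python
def localize_step(scan, grid, state, predict, match, is_inaccurate, relocalize):
    """One positioning cycle: predict -> scan-match -> check -> reposition/fuse.

    All callables are injected placeholders; `state` carries whatever the
    prediction step needs (previous pose, odometer/gyroscope readings).
    """
    predicted = predict(state)                    # pose predicted from odometer + gyroscope
    pose, score = match(scan, grid, predicted)    # scan matching yields optimal pose + score
    if not is_inaccurate(pose, score, predicted):
        return pose                               # accurate: optimal pose is the final pose
    relocated = relocalize(scan, grid, predicted) # repositioning process (local, then global)
    # fusion positioning: if repositioning also fails, trust the predicted pose
    return relocated if relocated is not None else predicted
```

The fall-through order mirrors the claim: scan matching is trusted first, repositioning is tried only on a detected misalignment, and the odometry prediction is the last resort.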
In one embodiment of the invention, a map coordinate system is constructed from the initial pose information of the robot collected by the odometer and the gyroscope; the position information of each obstacle in the current environment acquired by the laser radar sensor is converted into the same robot coordinate system and then into a world coordinate system, to obtain the motion-distortion-removed position information of each obstacle in the current environment; the predicted pose of the robot at the current moment is acquired using the pose information of the robot collected in real time by the odometer and the gyroscope; taking the predicted pose as the central pose for laser radar scan matching at the current moment, one or more candidate poses are searched within a set first search range, and for each candidate pose a matching score is calculated by scan-matching the obstacle position information scanned by the laser radar against the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system; an initial pose at the current moment is selected based on the matching score of each candidate pose, and nonlinear optimization is performed on the initial pose to obtain the optimal pose at the current moment and its matching score; and the motion-distortion-removed position information of each obstacle acquired by the laser radar is inserted into the map coordinate system according to the optimal pose, the two-dimensional map of the current environment being obtained by updating.
In an embodiment of the present invention, loading the saved final two-dimensional map and scan-matching the obstacle position information collected by the lidar sensor against the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system to obtain the optimal pose of the robot at the current moment includes: loading the saved final two-dimensional map when the robot is restarted; calculating the angular velocity and linear velocity of the robot based on the pose information of the robot acquired in real time by the odometer and the gyroscope and/or the optimal pose at the previous moment, so as to obtain the pose change information of the robot; obtaining the predicted pose at the current moment from the pose change information of the robot; taking the predicted pose as the central pose for laser radar scan matching at the current moment, searching one or more candidate poses within the set first search range, and calculating for each candidate pose a matching score by scan-matching the obstacle position information scanned by the laser radar against the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system; and selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain the optimal pose at the current moment and its matching score.
In an embodiment of the present invention, judging whether inaccurate positioning occurs in the positioning process includes: judging, based on a positioning-misalignment condition, whether inaccurate positioning has occurred; if the positioning-misalignment condition is met, it is judged that inaccurate positioning has occurred in the positioning process; if the positioning-misalignment condition is not met, it is judged that inaccurate positioning has not occurred. The positioning-misalignment condition comprises: the matching score of the optimal pose at the current moment is smaller than a set first threshold; the deviation between the optimal pose at the current moment and the optimal pose at the previous moment is larger than a set second threshold; the difference between the optimal pose at the current moment and the predicted pose at the current moment is larger than a set third threshold; the difference between the predicted pose at the current moment and the optimal pose at the previous moment is smaller than a set fourth threshold; and the proportion of laser radar scanning points whose matching distance is smaller than the set threshold is less than 40 percent.
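The misalignment check above can be sketched as a single predicate. All threshold values and the (x, y)-only pose representation are illustrative assumptions, not values from the patent:

```python
def is_positioning_inaccurate(best_score, best_pose, prev_best_pose,
                              predicted_pose, close_point_ratio,
                              score_min=0.55, pose_jump_max=0.5,
                              pred_diff_max=0.3, pred_step_max=0.2,
                              close_ratio_min=0.40):
    """Return True when all positioning-misalignment conditions hold.

    Poses are (x, y) tuples for brevity; thresholds are hypothetical.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return (best_score < score_min                              # low match score
            and dist(best_pose, prev_best_pose) > pose_jump_max # pose jumped since last moment
            and dist(best_pose, predicted_pose) > pred_diff_max # disagrees with prediction
            and dist(predicted_pose, prev_best_pose) < pred_step_max  # prediction itself plausible
            and close_point_ratio < close_ratio_min)            # <40% of scan points match closely
```

Note the fourth term: the prediction must itself be consistent with the previous optimal pose, which is what lets the check blame the scan match rather than the odometry.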
In an embodiment of the present invention, performing the repositioning process and/or the fusion positioning process to obtain the final pose of the robot when inaccurate positioning occurs includes: judging whether the repositioning condition and/or the fusion positioning condition is met, and taking as the final pose at the current moment the optimal pose obtained by the repositioning process (when the repositioning condition is met) and/or by the fusion positioning process (when the fusion positioning condition is met). The repositioning condition includes: the matching score of the optimal pose at the current moment is smaller than a set repositioning score threshold. The fusion positioning condition includes: the matching score of the optimal pose at the current moment is smaller than a set fusion positioning score threshold; the deviation between the optimal pose at the current moment and the optimal pose at the previous moment is larger than the set second threshold; the difference between the optimal pose at the current moment and the predicted pose at the current moment is larger than the set third threshold; the difference between the predicted pose at the current moment and the optimal pose at the previous moment is smaller than the set fourth threshold; the repositioning process has failed; and the proportion of laser radar scanning points whose matching distance is smaller than the set threshold is less than 40 percent.
In an embodiment of the present invention, the relocation procedure includes:
taking the predicted pose as the central pose for laser radar scan matching at the current moment, searching one or more candidate poses within a set second search range that is larger than the first search range, and calculating for each candidate pose a matching score by scan-matching the obstacle position information scanned by the laser radar against the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system; selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain a local repositioning pose at the current moment and its matching score; if the matching score of the local repositioning pose is larger than the score threshold for local repositioning, obtaining pose transformation information from the pose information of the robot at the current moment and the pose information before repositioning acquired by the odometer and the gyroscope, and combining it with the repositioning pose at the current moment to obtain the optimal pose of the robot at the current moment after local repositioning; if the matching score of the local repositioning pose is smaller than the score threshold for local repositioning, the local repositioning fails, and the second search range is replaced by one or more search ranges larger than the second search range and smaller than the global search range, and/or the score threshold for local repositioning is replaced by one or more smaller score thresholds, so that local repositioning is performed one or more further times.
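The escalation loop above (widen the search range and/or lower the score threshold on each failed attempt) can be sketched as follows. `match(scan, grid, center, search_range)` is an injected placeholder for the candidate-search plus nonlinear-optimization step, and the range/threshold sequences are illustrative numbers, not values from the patent:

```python
def local_relocalize(scan, grid, predicted_pose, match,
                     search_ranges=(2.0, 4.0, 8.0),
                     score_thresholds=(0.6, 0.5, 0.4)):
    """Try local repositioning around the predicted pose, escalating the
    search window and relaxing the score threshold on each failure."""
    for search_range, threshold in zip(search_ranges, score_thresholds):
        pose, score = match(scan, grid, predicted_pose, search_range)
        if score > threshold:
            return pose, score        # local repositioning succeeded
    return None                       # all attempts failed -> fall back to global search
```

Because the attempts stay centred on the predicted pose, a success here is far cheaper than scanning the whole map, which is the time saving the patent claims for local over global repositioning.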
In an embodiment of the present invention, the relocation procedure further includes: if the local repositioning times are greater than the local repositioning times threshold, taking the center of the two-dimensional map as the central pose of laser radar scanning at the current moment, searching one or more candidate poses in a set global searching range, and calculating the matching score of scanning and matching the position information of the obstacle scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system;
selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain a global repositioning pose at the current moment and the matching score of the global repositioning pose;
if the matching score of the global repositioning pose is larger than the score threshold for global repositioning, pose transformation information at repositioning is obtained from the pose information of the robot at the current moment and the pose information before repositioning acquired by the odometer and the gyroscope, and is then combined with the global repositioning pose at the current moment to obtain the optimal pose of the robot at the current moment after global repositioning; if the matching score of the global repositioning pose is smaller than the score threshold for global repositioning, the global repositioning fails, that is, the repositioning process fails.
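The global fallback can be sketched in the same style as the local case. `match` is the same injected candidate-search placeholder, and the range and threshold values are assumptions for illustration:

```python
def global_relocalize(scan, grid, map_center, match,
                      global_range=50.0, global_score_threshold=0.35):
    """Global repositioning: search around the map centre with a window
    covering the whole map, as described above."""
    pose, score = match(scan, grid, map_center, global_range)
    if score > global_score_threshold:
        return pose, score
    return None   # global repositioning failed, i.e. the repositioning process failed
```

A `None` result here is what triggers the fusion positioning process, which simply keeps the predicted pose.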
In an embodiment of the present invention, the fused positioning procedure includes: and taking the predicted pose at the current moment as the optimal pose of the robot.
To achieve the above and other related objects, the present invention provides a multi-sensor fusion laser radar positioning system applied to a mobile robot on which a laser radar sensor, an odometer and a gyroscope are disposed, the system comprising: a map building module, used for constructing a two-dimensional map of the current environment based on the position information of each obstacle in the current environment acquired by the laser radar sensor and the pose information of the robot acquired in real time by the odometer and the gyroscope, and updating the two-dimensional map in real time; a map stop updating module, connected with the map building module and used for stopping updating of the two-dimensional map when a control instruction corresponding to completion of obstacle scanning is received, and saving the most recently updated two-dimensional map as the final two-dimensional map, the final two-dimensional map comprising the coordinate position information of each obstacle under the map coordinate system; a scan matching module, connected with the map stop updating module and used for loading the saved final two-dimensional map and scan-matching the obstacle position information collected by the laser radar sensor against the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system, to obtain the optimal pose of the robot at the current moment; a positioning judging module, connected with the scan matching module and used for judging whether inaccurate positioning occurs in the positioning process; a positioning accuracy module, connected with the positioning judging module and used for taking the optimal pose as the final pose of the robot if inaccurate positioning does not occur; and a positioning inaccuracy solving module, connected with the positioning judging module and used for performing a repositioning process and/or a fusion positioning process if inaccurate positioning occurs, so as to obtain the final pose of the robot.
To achieve the above and other related objects, the present invention provides a multi-sensor fusion lidar positioning terminal, including: a memory for storing a computer program; and a processor for executing the multi-sensor fusion laser radar positioning method described above.
As described above, the multi-sensor fusion laser radar positioning method, system and terminal have the following beneficial effects: the invention constructs a two-dimensional map from the data collected by the laser radar sensor, the odometer and the gyroscope, and a high-precision pose with no accumulated error can be obtained by scan-matching the current laser radar scanning points against the built two-dimensional map; and when interference from a dynamic object causes erroneous positioning, a repositioning thread or a fusion positioning thread is started, which effectively solves the problem of the robot not knowing its own position for a long period and losing its capacity for autonomous behavior, improves the robot's flexibility, effectively handles positioning errors, greatly reduces the time consumed by local repositioning relative to global repositioning, and improves the robot's working stability and positioning accuracy.
Drawings
Fig. 1 is a flow chart of a laser radar positioning method with multi-sensor fusion according to an embodiment of the invention.
Fig. 2 is a schematic structural diagram of a multi-sensor fusion lidar positioning system according to an embodiment of the invention.
Fig. 3 is a schematic structural diagram of a multi-sensor fusion lidar positioning terminal according to an embodiment of the invention.
Detailed Description
Other advantages and effects of the present invention will become readily apparent to those skilled in the art from the following disclosure, which describes embodiments of the present invention with reference to specific examples. The invention may also be practiced or carried out in other, different embodiments, and the details of this description may be modified or varied in various respects without departing from the spirit and scope of the present invention. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other.
In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the invention. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical and operational changes may be made without departing from the spirit and scope of the present invention. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present invention is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," and the like, may be used herein to facilitate describing the relationship of one element or feature to another element or feature as illustrated in the figures.
Throughout the specification, when a portion is said to be "connected" to another portion, this includes not only the case of "direct connection" but also the case of "indirect connection" with other elements interposed therebetween. In addition, when a certain component is said to be "included" in a certain section, unless otherwise stated, other components are not excluded, but it is meant that other components may be included.
The first, second, and third terms are used herein to describe various portions, components, regions, layers and/or sections, but are not limited thereto. These terms are only used to distinguish one portion, component, region, layer or section from another portion, component, region, layer or section. Thus, a first portion, component, region, layer or section discussed below could be termed a second portion, component, region, layer or section without departing from the scope of the present invention.
Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
An embodiment of the invention provides a multi-sensor fusion laser radar positioning method, which constructs a two-dimensional map from data acquired by a laser radar sensor, an odometer and a gyroscope, and obtains a high-precision pose with no accumulated error by scan-matching the current laser radar scanning points against the built two-dimensional map; and when interference from a dynamic object causes erroneous positioning, a repositioning thread or a fusion positioning thread is started, which effectively solves the problem of the robot not knowing its own position for a long period and losing its capacity for autonomous behavior, improves the robot's flexibility, effectively handles positioning errors, greatly reduces the time consumed by local repositioning relative to global repositioning, and improves the robot's working stability and positioning accuracy.
The embodiments of the present invention will be described in detail below with reference to the attached drawings so that those skilled in the art to which the present invention pertains can easily implement the present invention. This invention may be embodied in many different forms and is not limited to the embodiments described herein.
As shown in fig. 1, a flow diagram of a laser radar positioning method with multi-sensor fusion in an embodiment of the invention is shown.
Applied to a mobile robot, the method comprises the following steps:
step S11: and constructing a two-dimensional map in the current environment based on the position information of each obstacle in the current environment acquired by the laser radar sensor and the pose information of the robot acquired by the odometer and the gyroscope in real time, and updating the two-dimensional map in real time.
Optionally, step S11 includes: constructing a map coordinate system from the initial pose information of the robot acquired by the odometer and the gyroscope; converting the position information of each obstacle in the current environment acquired by the laser radar sensor into the same robot coordinate system and then into a world coordinate system, to obtain the motion-distortion-removed position information of each obstacle in the current environment; acquiring the predicted pose of the robot at the current moment using the pose information of the robot acquired in real time by the odometer and the gyroscope; taking the predicted pose as the central pose for laser radar scan matching at the current moment, searching one or more candidate poses within a set first search range, and calculating for each candidate pose a matching score by scan-matching the obstacle position information scanned by the laser radar against the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system; selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain the optimal pose at the current moment and its matching score; and inserting the motion-distortion-removed position information of each obstacle acquired by the laser radar into the map coordinate system according to the optimal pose, and updating to obtain the two-dimensional map of the current environment.
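The candidate search around the predicted pose can be sketched as an exhaustive window search with an occupancy-hit score. This is a simplified illustration, assuming a boolean occupancy grid and a hit-ratio score; step sizes, window widths and the scoring rule are not taken from the patent:

```python
import numpy as np

def match_score(scan_xy, occ_grid, pose, resolution):
    """Fraction of scan points that land on occupied grid cells under `pose`."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    hits = 0
    for px, py in scan_xy:
        wx, wy = x + c * px - s * py, y + s * px + c * py   # point in map frame
        i, j = int(round(wx / resolution)), int(round(wy / resolution))
        if 0 <= i < occ_grid.shape[0] and 0 <= j < occ_grid.shape[1] and occ_grid[i, j]:
            hits += 1
    return hits / max(len(scan_xy), 1)

def scan_match(scan_xy, occ_grid, center_pose, search_range=0.3,
               angular_range=0.35, step=0.05, angle_step=0.05, resolution=0.05):
    """Score every candidate pose in a window around the centre (predicted) pose
    and return the best one; the winner would then seed the nonlinear optimization."""
    best_pose, best_score = center_pose, -1.0
    cx, cy, ct = center_pose
    for dx in np.arange(-search_range, search_range + 1e-9, step):
        for dy in np.arange(-search_range, search_range + 1e-9, step):
            for dt in np.arange(-angular_range, angular_range + 1e-9, angle_step):
                pose = (cx + dx, cy + dy, ct + dt)
                score = match_score(scan_xy, occ_grid, pose, resolution)
                if score > best_score:
                    best_pose, best_score = pose, score
    return best_pose, best_score
```

Production systems replace the brute-force triple loop with precomputed grids or branch-and-bound, but the score definition is the same idea.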
Optionally, converting the position information of each obstacle in the current environment collected by the laser radar sensor into the same robot coordinate system and then into the world coordinate system, so as to obtain the motion-distortion-removed position information of each obstacle, includes: calculating the linear velocity and angular velocity of the robot from the pose information of the robot at the latest moment and at the previous moment acquired in real time by the odometer and the gyroscope; for each laser point of the laser radar, acquiring the time stamp of that laser point, and subtracting the time stamp of the first laser point from the time stamp of the current laser point to obtain the time interval of robot movement; assuming that the robot moves uniformly within the period in which the laser radar data are received, calculating the robot pose corresponding to the time stamp of each laser point; establishing a coordinate system from the current robot pose, calculating the pose transformation matrix between the robot pose corresponding to the current laser point's time stamp and the robot pose corresponding to the first laser point's time stamp, and multiplying this transformation matrix by the current laser point's coordinates; and finally converting all laser point coordinates into the same robot coordinate system, thereby completing the removal of laser radar motion distortion.
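Under the uniform-motion assumption stated above, the per-point correction can be sketched as follows. The function name and the planar constant-velocity motion model are illustrative assumptions; the patent works with full pose transformation matrices:

```python
import math

def undistort_scan(points, stamps, v, w):
    """Project every laser point into the robot frame at the sweep's first
    timestamp, assuming constant linear velocity v and angular velocity w.

    `points` are (x, y) in the robot frame at each point's own timestamp;
    `stamps` are the per-point timestamps.
    """
    out, t0 = [], stamps[0]
    for (px, py), t in zip(points, stamps):
        dt = t - t0
        theta = w * dt                       # heading change since the first point
        if abs(w) > 1e-9:                    # arc motion
            x = (v / w) * math.sin(theta)
            y = (v / w) * (1.0 - math.cos(theta))
        else:                                # straight-line motion
            x, y = v * dt, 0.0
        c, s = math.cos(theta), math.sin(theta)
        # apply the pose transform from the frame at time t back to the frame at t0
        out.append((x + c * px - s * py, y + s * px + c * py))
    return out
```

Each point is moved by exactly the pose change the robot accumulated between the first point's timestamp and its own, which is the definition of motion-distortion removal.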
Optionally, the method for obtaining the predicted pose of the robot at the current moment by using the pose information of the robot acquired in real time by the odometer and the gyroscope includes: calculating the angular velocity and linear velocity of the robot based on the pose information of the robot acquired in real time by the odometer and the gyroscope and/or the optimal pose at the last moment, so as to obtain the pose change information of the robot; and obtaining the predicted pose at the current moment according to the pose change information of the robot.
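A minimal dead-reckoning sketch of this prediction step follows; the function name and the simple straight-line-plus-rotation model are illustrative assumptions:

```python
import math

def predict_pose(last_pose, v, w, dt):
    """Dead-reckon the predicted pose at the current moment from the
    last optimal pose (x, y, theta) and the linear velocity v and
    angular velocity w derived from the odometer and gyroscope."""
    x, y, theta = last_pose
    return (x + v * dt * math.cos(theta),   # displacement along heading
            y + v * dt * math.sin(theta),
            theta + w * dt)                 # accumulated rotation
```

This predicted pose then serves as the centre of the scan-matching search window described below.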
Optionally, selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain the optimal pose of the robot at the current moment, includes: constructing a least-squares equation from the obtained initial pose and performing nonlinear optimization, adding a cost function for the point cloud, a cost function for translation, a cost function for rotation, and their respective weights, and solving the optimization; the finally obtained pose is the optimal pose of the robot.
Optionally, inserting the motion distortion removal position information of each obstacle into the map coordinate system at the optimal pose, and updating to obtain the two-dimensional map of the current environment, includes: inserting the motion distortion removal position information of the obstacle points scanned by the laser radar into the map coordinate system at the current optimal pose, updating the coordinate positions of the obstacle points, updating the free area between the robot and the obstacle points, and accumulating the laser radar frame count.
Optionally, the pose information of the robot acquired in real time by the odometer and the gyroscope is calculated by means of the wheel odometer and the gyroscope, and is sent to the positioning algorithm layer through a serial port at a fixed frequency.
Step S12: and stopping updating the two-dimensional map when a control instruction corresponding to the completion of obstacle scanning is received, and storing the latest updated two-dimensional map as a final two-dimensional map.
In detail, the final two-dimensional map includes: coordinate position information of each obstacle in the map coordinate system.
Optionally, after mapping is completed, the pure positioning mode is enabled: updating of the map is stopped, the points scanned by the laser radar are no longer inserted into the two-dimensional map, and the map is saved.
Optionally, when a control instruction corresponding to the completion of scanning all the obstacles is received, updating of the two-dimensional map is stopped, and the latest updated two-dimensional map is saved as the final two-dimensional map. The stored map takes the form of a grid map, comprising: map boundary information, the number of grids, and the occupancy probability of each grid.
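To make the stored form concrete, a grid map holding boundary, grid count, and per-cell occupancy probability can be sketched as follows; all field names and the row-major layout are hypothetical, chosen only to illustrate the stored contents listed above:

```python
from dataclasses import dataclass, field

@dataclass
class GridMap:
    """Stored form of the final map: boundary information, number of
    grids, and the occupancy probability of each grid cell."""
    min_x: float                 # map boundary (lower-left corner)
    min_y: float
    resolution: float            # metres per grid cell
    width: int                   # number of grids along x
    height: int                  # number of grids along y
    occupancy: list = field(default_factory=list)  # row-major probabilities

    def cell_index(self, wx, wy):
        """World coordinates -> row-major cell index, or None when the
        point lies outside the map boundary."""
        gx = int((wx - self.min_x) / self.resolution)
        gy = int((wy - self.min_y) / self.resolution)
        if 0 <= gx < self.width and 0 <= gy < self.height:
            return gy * self.width + gx
        return None
```

In pure positioning mode such a structure is read-only: scan matching looks up `occupancy[cell_index(...)]` but never writes to it.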
Compared with the prior art, in which the map continues to be updated during the positioning process, accumulated errors do not occur in the positioning process of the present scheme, so the positioning accuracy is high.
Step S13: and loading the saved final two-dimensional map, and carrying out scanning matching on the position information of the obstacle acquired by the laser radar sensor and the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system so as to obtain the optimal pose of the robot at the current moment.
In detail, after the robot is restarted, the final two-dimensional map is loaded first and the pure positioning mode of the robot is set; at this point, the obstacle points scanned by the laser radar are no longer updated into the map, and the laser radar frame count is no longer accumulated. The position information of the obstacles acquired by the laser radar sensor is then scan-matched with the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system, so as to obtain the optimal pose of the robot at the current moment.
Optionally, step S13 includes: when the robot is restarted, loading the saved final two-dimensional map; specifically, after the robot is restarted, the final two-dimensional map is loaded first and the pure positioning mode of the robot is set, at which point the obstacle points scanned by the laser radar are no longer updated into the map and the laser radar frame count is no longer accumulated; calculating the angular velocity and linear velocity of the robot based on the pose information of the robot acquired in real time by the odometer and the gyroscope and/or the pose at the last moment, so as to obtain the pose change information of the robot; obtaining the predicted pose at the current moment according to the pose change information of the robot; taking the predicted pose as the central pose of the laser radar scan at the current moment, searching one or more candidate poses in a set first search range, and calculating the matching score of scan matching between the position information of the obstacles scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system; and selecting the candidate pose with the highest matching score as the initial pose at the current moment, and performing nonlinear optimization on the initial pose to obtain the optimal pose at the current moment and its matching score.
Optionally, calculating the angular velocity and linear velocity of the robot based on the pose information of the robot acquired in real time by the odometer and the gyroscope and/or the pose at the last moment, so as to obtain the pose change information of the robot, includes: obtaining the time difference between the robot pose acquired by the odometer and the gyroscope at the last moment and that at the current latest moment, and calculating the linear velocity and angular velocity of the robot from the two poses and this time difference; and calculating the pose change information of the robot, such as rotation and displacement, from the linear velocity and angular velocity of the robot and the time elapsed between the current moment and the last laser scan match.
Optionally, the method for obtaining the predicted pose at the current moment according to the pose change information of the robot includes: adding the rotation and displacement of the robot to the optimal pose obtained by laser scan matching at the previous moment, so as to obtain the predicted pose at the current moment.
Optionally, taking the predicted pose as the central pose of the laser radar scan at the current moment, searching one or more candidate poses in the set first search range, and calculating the matching score between the position information of the obstacles scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system, includes:
Taking the predicted pose as the central pose of the current laser scan match, a search window is created within the set first search range, and one or more candidate poses are searched in a database formed by each laser point cloud scanned by the laser radar and the pose information corresponding to each laser point cloud. It should be noted that both the size of the search window and the search angle can be set before searching; for example, a 2 m x 2 m search window is established with a search angle of 180 degrees.
The matching score of each candidate pose is obtained by counting the number of laser points of the corresponding laser point cloud that fall on grid-map cells whose probability value is greater than a preset threshold, and dividing this number by the total number of laser points; the preset threshold can be set as required, and is preferably 0.5.
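The scoring rule just described reduces to a short function; the name and input convention (one occupancy probability per projected laser point, 0.0 for points that miss the map) are illustrative assumptions:

```python
def match_score(hit_probs, threshold=0.5):
    """Fraction of laser points that land on map cells whose occupancy
    probability exceeds the threshold (0.5, as preferred above).

    hit_probs: occupancy probability of the grid cell each projected
    laser point falls on (0.0 for points outside the map).
    """
    if not hit_probs:
        return 0.0
    hits = sum(1 for p in hit_probs if p > threshold)
    return hits / len(hit_probs)
```

The candidate pose whose point cloud yields the highest such fraction becomes the initial pose for the subsequent nonlinear optimization.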
Optionally, selecting an initial pose at the current moment based on the matching score of each candidate pose and then performing nonlinear optimization on the initial pose, so as to obtain the optimal pose at the current moment and its matching score, includes: selecting the candidate pose with the highest matching score as the initial pose; constructing a least-squares equation from the laser point cloud corresponding to the initial pose and performing nonlinear optimization, adding a cost function for the point cloud, a translational cost function, a rotational cost function, and their respective weights, and solving the optimization; the finally obtained pose is the optimal pose of the robot, together with the matching score corresponding to the optimal pose.
Step S14: judging whether inaccurate positioning occurs in the positioning process.
Optionally, the method for judging whether inaccurate positioning occurs in the positioning process includes: judging based on the positioning misalignment conditions; if a positioning misalignment condition is met, it is judged that inaccurate positioning occurs in the positioning process; if no positioning misalignment condition is met, it is judged that inaccurate positioning does not occur in the positioning process;
wherein the positioning misalignment conditions include: the matching score of the optimal pose at the current moment is smaller than a set first threshold; the deviation between the optimal pose at the current moment and the optimal pose at the last moment is larger than a set second threshold; the difference between the optimal pose at the current moment and the predicted pose at the current moment is larger than a set third threshold; the difference between the predicted pose at the current moment and the optimal pose at the last moment is smaller than a set fourth threshold; and the proportion of laser radar scan points whose distance is smaller than the set threshold is lower than 40%.
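These conditions can be checked with a single predicate; every threshold value below is an illustrative placeholder (the patent leaves them configurable), and the scalar "deviation" and "difference" inputs are assumed to be precomputed pose distances:

```python
def positioning_misaligned(score, pose_jump, pred_diff, pred_step,
                           close_point_ratio,
                           score_min=0.3, jump_max=0.5,
                           diff_max=0.3, step_min=0.05):
    """True if any positioning-misalignment condition listed above holds.

    score             : matching score of the current optimal pose
    pose_jump         : deviation between current and last optimal pose
    pred_diff         : difference between optimal and predicted pose
    pred_step         : difference between predicted pose and last optimal pose
    close_point_ratio : fraction of scan points closer than the distance threshold
    """
    return (score < score_min
            or pose_jump > jump_max
            or pred_diff > diff_max
            or pred_step < step_min
            or close_point_ratio < 0.40)
```

A well-matched, smoothly moving robot fails none of the checks and proceeds directly to step S15.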
Step S15: and if the situation of inaccurate positioning does not occur, taking the optimal pose as the final pose of the robot.
Optionally, if no positioning misalignment condition is met, it is judged that inaccurate positioning does not occur in the positioning process, and the optimal pose obtained in step S13 is taken as the final pose of the robot.
Step S16: and if the situation of inaccurate positioning occurs, performing a repositioning process and/or a fusion positioning process to obtain the final pose of the robot.
Optionally, if inaccurate positioning occurs, it is judged whether the repositioning condition and/or the fusion positioning condition is met, so that the optimal pose at the current moment is obtained by the repositioning flow when the repositioning condition is met and/or by the fusion positioning flow when the fusion positioning condition is met, and the final pose at the current moment is obtained therefrom;
specifically, if inaccurate positioning occurs, it is judged whether the repositioning condition and/or the fusion positioning condition is met; if the repositioning condition is met, the repositioning flow is performed; if the fusion positioning condition is met, the fusion positioning flow is performed; and the final pose at the current moment is obtained according to the optimal pose obtained by the repositioning flow and/or the fusion positioning flow.
Wherein the repositioning condition includes: the matching score of the optimal pose at the current moment is smaller than a set repositioning score threshold. The fusion positioning conditions include: the matching score of the optimal pose at the current moment is smaller than a set fusion positioning score threshold; the deviation between the optimal pose at the current moment and the optimal pose at the last moment is larger than the set second threshold; the difference between the optimal pose at the current moment and the predicted pose at the current moment is larger than the set third threshold; the difference between the predicted pose at the current moment and the optimal pose at the last moment is smaller than the set fourth threshold; the repositioning flow fails; and the proportion of laser radar scan points whose distance is smaller than the set threshold is lower than 40%.
It should be noted that the repositioning score threshold, the fused positioning score threshold, the second threshold, the third threshold, and the fourth threshold may be set according to specific requirements, which is not limited in the present application.
Optionally, the repositioning score threshold may be greater than, equal to, or less than the fusion positioning score threshold, which is not limited herein. For example, if the repositioning score threshold is 0.2 and the fusion positioning score threshold is 0.3, then when the matching score of the optimal pose obtained in step S13 is less than 0.3 and greater than 0.2, the fusion positioning flow is performed; and if the matching score is less than 0.2, both the repositioning flow and the fusion positioning flow are performed.
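The example threshold logic above (0.2 for repositioning, 0.3 for fusion positioning) can be expressed as a small dispatch function; the function name and return convention are illustrative:

```python
def choose_recovery(score, reloc_threshold=0.2, fusion_threshold=0.3):
    """Decide which recovery flows to run from the matching score of
    the optimal pose, using the example thresholds given above."""
    flows = []
    if score < reloc_threshold:
        flows.append("relocation")     # score below both thresholds
    if score < fusion_threshold:
        flows.append("fusion")         # score below the fusion threshold
    return flows
```

A score of 0.25 triggers only fusion positioning; a score of 0.1 triggers both flows; a score of 0.5 triggers neither.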
Optionally, the repositioning flow includes: taking the predicted pose as the central pose of the laser radar scan at the current moment, searching one or more candidate poses in a set second search range larger than the first search range, and calculating the matching score of scan matching between the position information of the obstacles scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system. Specifically, taking the predicted pose as the central pose of the current laser scan match, a search window is created within the second search range, and one or more candidate poses are searched in a database formed by each laser point cloud scanned by the laser radar and the pose information corresponding to each laser point cloud; the matching score of each candidate pose is obtained by counting the number of laser points of the corresponding laser point cloud that fall on grid-map cells whose probability value is greater than a preset threshold, and dividing this number by the total number of laser points. For example, a 3 m x 3 m search window is established with a search angle of 360 degrees.
An initial pose at the current moment is then selected based on the matching score of each candidate pose, and nonlinear optimization is performed on the initial pose to obtain the local repositioning pose at the current moment and its matching score: the candidate pose with the highest matching score is selected as the initial pose; a least-squares equation is constructed from the obtained initial pose and nonlinear optimization is performed, adding a cost function for the point cloud, a cost function for translation, a cost function for rotation, and their respective weights, and solving the optimization; the finally obtained pose is the local repositioning pose of the robot, together with its matching score.
If the matching score of the local repositioning pose is larger than the local repositioning score threshold, pose transformation information is obtained according to the pose information of the robot at the current moment and the pose information before repositioning acquired by the odometer and the gyroscope, and the optimal pose of the robot at the current moment after local repositioning is then obtained in combination with the repositioning pose at the current moment. If the matching score of the local repositioning pose is smaller than the local repositioning score threshold, the local repositioning fails; the second search range is then replaced by one or more search ranges that are larger than the second search range and smaller than the global search range, and/or the local repositioning score threshold is replaced by one or more smaller score thresholds, so that local repositioning is performed in one or more further steps.
Optionally, if a relocation fails, the relocation failure count is incremented by one. When local relocation fails more than 3 consecutive times, the minimum relocation score threshold is reduced to 0.50; when local relocation fails more than 5 consecutive times, the search window is enlarged to 5 m x 5 m, with the search angle and score threshold unchanged; and when local relocation fails more than 10 consecutive times, the global relocation thread is started.
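The escalation schedule just described maps naturally onto the failure count; the base window size and base score threshold below are illustrative defaults, not values fixed by the patent:

```python
def relocation_params(failures, base_window=3.0, base_score=0.60):
    """Escalate the local-relocation search parameters with the number
    of consecutive failures, following the schedule above.

    Returns (window_size_m, score_threshold, use_global)."""
    window, score, use_global = base_window, base_score, False
    if failures > 3:
        score = 0.50          # relax the minimum relocation score threshold
    if failures > 5:
        window = 5.0          # enlarge the search window to 5 m x 5 m
    if failures > 10:
        use_global = True     # fall back to the global relocation thread
    return window, score, use_global
```

Each new relocation attempt queries this schedule, so the search gradually widens and loosens before the expensive global search is engaged.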
Optionally, similar to the local relocation step, the relocation procedure further includes: if the local repositioning times are greater than the local repositioning times threshold, taking the central position of the two-dimensional map as the central pose of laser radar scanning at the current moment, searching one or more candidate poses in a set global searching range, and calculating the matching score of scanning and matching the position information of the obstacle scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system; selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain a global repositioning pose at the current moment and the matching score of the global repositioning pose; if the matching score of the global repositioning pose is larger than the score threshold of global repositioning, pose transformation information during repositioning is obtained according to pose information of the robot at the current moment and pose information before repositioning acquired by the odometer and the gyroscope, and then the optimal pose of the robot at the current moment after global repositioning is obtained by combining the global repositioning pose at the current moment; if the matching score of the global repositioning pose is smaller than the score threshold of global repositioning, the global repositioning fails, namely the repositioning flow fails.
Optionally, when the repositioning flow is performed, one task can be divided into a plurality of subtasks, and multithreading is adopted to increase the repositioning speed, which can be improved by up to 3 times or more.
Optionally, the fusion positioning process includes:
In the pure positioning mode, after laser scan matching is completed, the laser scan matching result is verified; when the matching score of the optimal pose at the current moment is smaller than the set fusion positioning score threshold, the deviation between the optimal pose at the current moment and the optimal pose at the last moment is larger than the set second threshold, and the difference between the optimal pose at the current moment and the predicted pose at the current moment is larger than the set third threshold, the pose obtained by laser scan matching is considered incorrect, and the predicted pose is used as the optimal pose of the robot.
When the difference between the predicted pose at the current moment and the optimal pose at the last moment is smaller than the set fourth threshold, the motion change of the robot is considered small; to avoid laser scan matching errors caused by dynamic objects, the predicted pose is used as the optimal pose of the robot.
When the laser radar scan matching score is very low and local or global repositioning has been started and has failed, the predicted pose is used as the final pose of the current robot.
When the proportion of laser radar scan points whose distance is smaller than the set threshold is lower than 40%, the robot is considered to be enclosed; to avoid incorrect positioning caused by the robot being enclosed, the predicted pose is used as the optimal pose of the current robot.
Optionally, both local relocation and global relocation use the branch-and-bound algorithm and multithreading to accelerate the search.
Optionally, the method for obtaining the final pose at the current moment according to the optimal pose obtained by the repositioning process and/or the fusion positioning process includes the following cases:
(1) If only the repositioning flow is carried out and an optimal pose is obtained, that pose is the final pose.
(2) If only the fusion positioning flow is carried out and an optimal pose is obtained, that pose is the final pose.
(3) If the repositioning flow and the fusion positioning flow are both performed and each obtains an optimal pose, the final pose can be calculated according to the weights of the two poses.
(4) If the repositioning flow and the fusion positioning flow are both performed, the optimal pose obtained by the repositioning flow may be directly selected as the final pose.
(5) If the repositioning flow and the fusion positioning flow are both performed, the optimal pose obtained by the fusion positioning flow may be directly selected as the final pose.
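For the weighted-combination case, one workable sketch blends two (x, y, theta) poses; the function name, the single `weight_a` parameter, and the shortest-angle blending are illustrative assumptions, since the patent does not fix the weighting scheme:

```python
import math

def fuse_poses(pose_a, pose_b, weight_a=0.5):
    """Weighted combination of the repositioning pose and the
    fusion-flow pose. The heading is blended through its shortest
    angular difference so that e.g. -0.1 rad and +0.1 rad average
    to 0 rather than wrapping the long way round."""
    wa, wb = weight_a, 1.0 - weight_a
    xa, ya, ta = pose_a
    xb, yb, tb = pose_b
    dt = math.atan2(math.sin(tb - ta), math.cos(tb - ta))  # wrapped diff
    return (wa * xa + wb * xb,
            wa * ya + wb * yb,
            ta + wb * dt)
```

Setting `weight_a` to 1.0 or 0.0 recovers cases (4) and (5), where one flow's pose is taken directly.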
By obtaining the optimal pose at the current moment through the repositioning flow when the repositioning condition is met and/or the fusion positioning flow when the fusion positioning condition is met, the problems of inaccurate positioning and accumulated positioning errors caused by erroneous mapping during long-term operation can be effectively avoided. The wheel odometer and gyroscope data are fused, and when the robot is being locally repositioned or the laser radar scan matching is unreliable, the predicted pose is used as the current pose of the robot, so that incorrect positioning and pose loss are avoided; at the same time, local repositioning greatly reduces time consumption compared with global repositioning.
Optionally, the final pose of the robot is updated each time odometer data arrives: the pose transformation matrix between the current latest odometer pose and the odometer pose at the last moment is calculated and multiplied by the final pose of the robot at the last moment, thereby updating the final pose of the robot to the current latest moment.
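This odometry-increment update can be sketched with plain 3x3 homogeneous transforms; the helper names are illustrative:

```python
import math

def se2_mat(x, y, t):
    """Pose (x, y, theta) as a 3x3 homogeneous transform."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def se2_mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def se2_inv(T):
    """Inverse of a homogeneous transform: [R t]^-1 = [R^T  -R^T t]."""
    c, s, x, y = T[0][0], T[1][0], T[0][2], T[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0.0, 0.0, 1.0]]

def mat_to_pose(T):
    return (T[0][2], T[1][2], math.atan2(T[1][0], T[0][0]))

def propagate(final_pose, odom_prev, odom_now):
    """Compose the transform between the previous and latest odometer
    poses onto the last final pose, as described above."""
    delta = se2_mul(se2_inv(se2_mat(*odom_prev)), se2_mat(*odom_now))
    return mat_to_pose(se2_mul(se2_mat(*final_pose), delta))
```

Because the odometry increment is expressed in the robot's own frame, a 1 m forward step moves a robot facing +y (theta = pi/2) along +y in the map, regardless of where the odometer frame started.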
Optionally, the final pose of the robot is updated when the odometer data arrives, the laser radar data arrives and the repositioning is successful.
Based on the same principles as the embodiments described above, the present invention provides a multi-sensor fusion laser radar positioning system.
Specific embodiments are provided below with reference to the accompanying drawings:
fig. 2 shows a schematic structural diagram of a multi-sensor fusion laser radar positioning system according to an embodiment of the present invention.
The system is applied to a mobile robot equipped with a laser radar sensor, an odometer, and a gyroscope, and includes:
the mapping module 21 is configured to construct a two-dimensional map in the current environment based on the position information of each obstacle in the current environment acquired by the lidar sensor and the pose information of the robot acquired by the odometer and the gyroscope in real time, and update the two-dimensional map in real time;
the map stop updating module 22, connected with the mapping module 21, is used for stopping updating the two-dimensional map and saving the latest updated two-dimensional map as the final two-dimensional map when a control instruction corresponding to the completion of obstacle scanning is received; wherein the final two-dimensional map includes: the coordinate position information of each obstacle under the map coordinate system;
The scanning matching module 23 is connected with the map stop updating module 22 and is used for loading the saved final two-dimensional map, and carrying out scanning matching on the position information of the obstacle acquired by the laser radar sensor and the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system so as to obtain the optimal pose of the robot at the current moment;
the positioning judging module 24 is connected with the scanning matching module 23 and is used for judging whether inaccurate positioning occurs in the positioning process;
the positioning accuracy module 25 is connected with the positioning judgment module 24 and is used for taking the optimal pose as the final pose of the robot if no positioning inaccuracy exists;
the inaccurate positioning solving module 26, connected to the positioning judging module 24, is configured to perform a repositioning flow and/or a fusion positioning flow if inaccurate positioning occurs, so as to obtain the final pose of the robot.
It should be noted that the division of the modules in the system embodiment of fig. 2 is merely a division of logical functions; in practice the modules may be fully or partially integrated into one physical entity or physically separated. These modules may all be implemented in the form of software called by a processing element, or all in hardware, or partly as software called by a processing element and partly as hardware;
For example, each module may be one or more integrated circuits configured to implement the above methods, e.g.: one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), one or more digital signal processors (Digital Signal Processor, abbreviated as DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), or the like. For another example, when one of the above modules is implemented in the form of program code invoked by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Therefore, the implementation principle of the multi-sensor fusion laser radar positioning system has been described in the foregoing embodiments, and thus a detailed description thereof is omitted herein.
Fig. 3 shows a schematic structural diagram of a multi-sensor fusion lidar positioning terminal 30 in an embodiment of the invention.
The multi-sensor fusion laser radar positioning terminal 30 includes: a memory 31 and a processor 32. The memory 31 is used for storing a computer program; the processor 32 runs the computer program to implement the multi-sensor fusion laser radar positioning method described in fig. 1.
Alternatively, the number of the memories 31 may be one or more, and the number of the processors 32 may be one or more; one of each is taken as an example in fig. 3.
Optionally, the processor 32 in the multi-sensor fusion laser radar positioning terminal 30 loads one or more instructions corresponding to the process of the application program into the memory 31 according to the steps shown in fig. 1, and the processor 32 runs the application program stored in the memory 31, so as to implement the various functions of the multi-sensor fusion laser radar positioning method shown in fig. 1.
Optionally, the memory 31 may include, but is not limited to, high-speed random access memory and nonvolatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
Alternatively, the processor 32 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; but also digital signal processors (Digital Signal Processing, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The invention also provides a computer readable storage medium storing a computer program which when run implements the multi-sensor fusion lidar positioning method shown in fig. 1. The computer-readable storage medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs (compact disk-read only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read only memories), EEPROMs (electrically erasable programmable read only memories), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing machine-executable instructions. The computer readable storage medium may be an article of manufacture that is not accessed by a computer device or may be a component used by an accessed computer device.
In summary, the multi-sensor fusion laser radar positioning method, system, and terminal construct a two-dimensional map from the data acquired by the laser radar sensor, the odometer, and the gyroscope, and obtain a high-precision pose without accumulated error by scan-matching the current laser radar scan points against the built two-dimensional map. When the robot is disturbed by a dynamic object and mis-positioned, a repositioning thread or a fusion positioning thread is started, which effectively solves the problem of the robot losing track of its own position for a long time and losing its capability for autonomous behavior, improves the flexibility of the robot, effectively handles positioning errors, greatly reduces the time consumption of local repositioning relative to global repositioning, and improves the working stability and positioning accuracy of the robot. The invention therefore effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments are merely illustrative of the principles of the present invention and its effectiveness, and are not intended to limit the invention. Modifications and variations may be made to the above-described embodiments by those skilled in the art without departing from the spirit and scope of the invention. It is therefore intended that all equivalent modifications and changes made by those skilled in the art without departing from the spirit and technical spirit of the present invention shall be covered by the appended claims.

Claims (9)

1. A multi-sensor fusion laser radar positioning method, characterized in that it is applied to a mobile robot on which a laser radar sensor, an odometer and a gyroscope are arranged, the method comprising the following steps:
constructing a two-dimensional map in the current environment based on the position information of each obstacle in the current environment acquired by the laser radar sensor and the pose information of the robot acquired by the odometer and the gyroscope in real time, and updating the two-dimensional map in real time;
stopping updating the two-dimensional map when a control instruction corresponding to the completion of obstacle scanning is received, and storing the latest updated two-dimensional map as a final two-dimensional map; wherein the final two-dimensional map comprises: coordinate position information of each obstacle under a map coordinate system;
loading the saved final two-dimensional map, and carrying out scanning matching on the position information of the obstacle acquired by the laser radar sensor and the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system so as to obtain the optimal pose of the robot at the current moment;
judging whether inaccurate positioning occurs in the positioning process;
if the situation of inaccurate positioning does not occur, the optimal pose is used as the final pose of the robot;
if the situation of inaccurate positioning occurs, performing a repositioning process and/or a fusion positioning process to obtain the final pose of the robot;
the construction of the two-dimensional map in the current environment based on the position information of each obstacle in the current environment collected by the laser radar sensor and the pose information of the robot collected by the odometer and the gyroscope in real time, and the updating of the two-dimensional map in real time comprises the following steps:
constructing a map coordinate system according to initial pose information of the robot acquired by the odometer and the gyroscope;
converting the position information of each obstacle in the current environment acquired by the laser radar sensor into the same robot coordinate system and then into a world coordinate system to obtain the movement distortion removal position information of each obstacle in the current environment acquired by the laser radar sensor;
acquiring the predicted pose of the robot at the current moment by utilizing the pose information of the robot acquired in real time by the odometer and the gyroscope;
taking the predicted pose as a central pose matched with the laser radar scanning at the current moment, searching one or more candidate poses in a set first searching range, and calculating a matching score of scanning and matching the position information of the obstacle scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system;
selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain an optimal pose at the current moment and the matching score of the optimal pose;
and inserting the motion distortion removal position information of each obstacle acquired by the laser radar into the map coordinate system according to the optimal pose, and updating to obtain a two-dimensional map of the current environment.
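The candidate-pose search and scoring recited in the steps above can be sketched as follows. This is an illustrative Python sketch under assumed data structures (a dict-based occupancy grid keyed by cell index, scan points given in the robot frame); it is not the claimed implementation, and all names are hypothetical:

```python
import math

def match_score(grid, resolution, scan_xy, pose):
    """Average occupancy probability of the scan points projected into the
    map under a candidate pose (x, y, theta); `grid` maps cell index
    (ix, iy) to an occupancy probability in [0, 1]."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    total = 0.0
    for px, py in scan_xy:                              # points in robot frame
        wx, wy = x + c * px - s * py, y + s * px + c * py
        total += grid.get((round(wx / resolution), round(wy / resolution)), 0.0)
    return total / len(scan_xy)

def best_candidate(grid, resolution, scan_xy, center, lin_win, ang_win,
                   lin_step, ang_step):
    """Exhaustively score candidate poses in a window (the 'first search
    range') around the predicted (center) pose; return the best (pose, score)."""
    best_pose = center
    best_score = match_score(grid, resolution, scan_xy, center)
    cx, cy, cth = center
    n_lin, n_ang = int(lin_win / lin_step), int(ang_win / ang_step)
    for i in range(-n_lin, n_lin + 1):
        for j in range(-n_lin, n_lin + 1):
            for k in range(-n_ang, n_ang + 1):
                cand = (cx + i * lin_step, cy + j * lin_step, cth + k * ang_step)
                score = match_score(grid, resolution, scan_xy, cand)
                if score > best_score:
                    best_pose, best_score = cand, score
    return best_pose, best_score
```

A production implementation (e.g. Cartographer-style real-time correlative matching) would precompute rotated scans and use a multi-resolution grid before the nonlinear refinement step; the exhaustive triple loop here only illustrates the candidate-pose search and scoring.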
2. The multi-sensor fusion laser radar positioning method according to claim 1, wherein the loading the saved final two-dimensional map, and the scanning and matching the position information of the obstacles collected by the laser radar sensor with the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system, so as to obtain the optimal pose of the robot at the current moment, comprises:
when the robot is restarted, loading the saved final two-dimensional map;
calculating the angular velocity and the linear velocity of the robot based on the pose information of the robot acquired in real time by the odometer and the gyroscope and/or the optimal pose at the last moment, so as to obtain the pose change information of the robot;
obtaining a predicted pose at the current moment according to the pose change information of the robot;
taking the predicted pose as a central pose matched with the laser radar scanning at the current moment, searching one or more candidate poses in a set first searching range, and calculating a matching score of scanning and matching the position information of the obstacle scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system;
and selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain an optimal pose at the current moment and the matching score of the optimal pose.
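The pose prediction from odometer and gyroscope readings described in this claim can be illustrated with a constant-velocity unicycle model. The motion model is an assumption for illustration; the claim itself does not fix one:

```python
import math

def predict_pose(prev_pose, v, w, dt):
    """Propagate the last optimal pose (x, y, theta) using linear velocity
    v (m/s) and angular velocity w (rad/s) over dt seconds."""
    x, y, th = prev_pose
    if abs(w) < 1e-9:                     # straight-line motion
        return (x + v * dt * math.cos(th), y + v * dt * math.sin(th), th)
    th_new = th + w * dt                  # exact arc for constant v and w
    x_new = x + (v / w) * (math.sin(th_new) - math.sin(th))
    y_new = y - (v / w) * (math.cos(th_new) - math.cos(th))
    return (x_new, y_new, th_new)
```

The predicted pose produced this way then serves as the central pose for the scan-matching search of the next step.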
3. The multi-sensor fusion lidar positioning method of claim 2, wherein the determining whether the positioning misalignment occurs during the positioning process comprises:
judging, based on a positioning misalignment condition, whether inaccurate positioning occurs in the positioning process;
if the positioning misalignment condition is met, judging that inaccurate positioning has occurred in the positioning process;
if the positioning misalignment condition is not met, judging that inaccurate positioning has not occurred in the positioning process;
wherein the positioning misalignment condition comprises: the matching score of the optimal pose at the current moment is smaller than a set first threshold, the deviation between the optimal pose at the current moment and the optimal pose at the last moment is larger than a set second threshold, the difference between the optimal pose at the current moment and the predicted pose at the current moment is larger than a set third threshold, the difference between the predicted pose at the current moment and the optimal pose at the last moment is smaller than a set fourth threshold, and the proportion of laser radar scanning points with a distance smaller than a set threshold is lower than 40 percent.
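The compound misalignment condition of claim 3 can be expressed as a single boolean check. All threshold defaults below are illustrative placeholders; the claim leaves the first to fourth thresholds unspecified:

```python
def positioning_misaligned(score, pose_jump, pred_err, pred_drift, near_ratio,
                           score_thr=0.55, jump_thr=0.2, err_thr=0.15,
                           drift_thr=0.1, ratio_thr=0.40):
    """True when all misalignment conditions hold: low matching score, a
    large jump from the last optimal pose, a large deviation from the
    predicted pose, a predicted pose still consistent with the last optimal
    pose (so odometry is trusted), and too few close-range matched scan
    points. Threshold defaults are hypothetical placeholders."""
    return (score < score_thr and pose_jump > jump_thr and pred_err > err_thr
            and pred_drift < drift_thr and near_ratio < ratio_thr)
```

Note the asymmetry the claim encodes: the scan match must disagree with the odometry prediction while the prediction itself stays consistent with the last optimal pose, which points to dynamic-object interference rather than odometry failure.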
4. A multi-sensor fusion lidar positioning method according to claim 3, wherein the performing a repositioning procedure and/or a fusion positioning procedure to obtain the final pose of the robot if the positioning is inaccurate comprises:
if the situation of inaccurate positioning occurs, judging whether a repositioning condition and/or a fusion positioning condition is met, and performing the repositioning process corresponding to the met repositioning condition and/or the fusion positioning process corresponding to the met fusion positioning condition to obtain the optimal pose at the current moment, so as to obtain the final pose at the current moment;
wherein the repositioning condition includes: the matching score of the optimal pose at the current moment is smaller than a set repositioning score threshold; the fusion positioning condition includes: the matching score of the optimal pose at the current moment is smaller than a set fusion positioning score threshold, the deviation between the optimal pose at the current moment and the optimal pose at the last moment is larger than the set second threshold, the difference between the optimal pose at the current moment and the predicted pose at the current moment is larger than the set third threshold, the difference between the predicted pose at the current moment and the optimal pose at the last moment is smaller than the set fourth threshold, the repositioning process has failed, and the proportion of laser radar scanning points with a distance smaller than the set threshold is lower than 40%.
5. The multi-sensor fusion lidar positioning method of claim 4, wherein the repositioning procedure comprises:
taking the predicted pose as a central pose matched with the laser radar scanning at the current moment, searching one or more candidate poses in a set second searching range which is larger than the first searching range, and calculating a matching score of scanning and matching the position information of the obstacle scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system;
selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain a local repositioning pose at the current moment and the matching score of the local repositioning pose;
if the matching score of the local repositioning pose is larger than the score threshold of the local repositioning, pose transformation information is obtained according to pose information of the robot at the current moment and pose information before repositioning acquired by the odometer and the gyroscope, and then the optimal pose of the robot at the current moment after the local repositioning is obtained by combining the repositioning pose at the current moment;
if the matching score of the local repositioning pose is smaller than the score threshold of local repositioning, the local repositioning fails, the second search range is replaced by one or more search ranges which are larger than the second search range and smaller than the global search range, and/or the score threshold of local repositioning is replaced by one or more score thresholds smaller than the score threshold of local repositioning, so that local repositioning is performed one or more further times.
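The retry loop of claim 5, which widens the search range and/or lowers the acceptance threshold after each failed local repositioning, can be sketched as follows. The helper `try_match` is hypothetical and stands for one scan-matching pass over a given search range:

```python
def local_relocalize(try_match, search_ranges, score_thresholds):
    """Run local repositioning attempts with progressively wider search
    ranges and progressively lower acceptance thresholds. `try_match(r)`
    performs one scan-matching pass over search range r and returns
    (pose, score). Returns the first accepted (pose, score), or None if
    every attempt fails (the flow then escalates to global repositioning
    per claim 6)."""
    for search_range, threshold in zip(search_ranges, score_thresholds):
        pose, score = try_match(search_range)
        if score > threshold:
            return pose, score
    return None
```

Widening only on failure is what keeps local repositioning far cheaper than a global search: most misalignments are corrected within the first, smallest window.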
6. The multi-sensor fusion lidar positioning method of claim 5, wherein the repositioning procedure further comprises:
if the number of local repositioning attempts is greater than a local repositioning times threshold, taking the center of the two-dimensional map as the central pose of laser radar scanning at the current moment, searching one or more candidate poses in a set global searching range, and calculating the matching score of scanning and matching the position information of the obstacle scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system;
selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain a global repositioning pose at the current moment and the matching score of the global repositioning pose;
if the matching score of the global repositioning pose is larger than the score threshold of global repositioning, pose transformation information during repositioning is obtained according to pose information of the robot at the current moment and pose information before repositioning acquired by the odometer and the gyroscope, and then the optimal pose of the robot at the current moment after global repositioning is obtained by combining the global repositioning pose at the current moment;
if the matching score of the global repositioning pose is smaller than the score threshold of global repositioning, the global repositioning fails, namely the repositioning flow fails.
7. The multi-sensor fusion lidar positioning method of claim 4, wherein the fusion positioning procedure comprises:
taking the predicted pose at the current moment as the optimal pose of the robot.
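Claims 4 to 7 together describe a dispatch: keep the optimal pose when positioning is accurate, otherwise attempt the repositioning flow (local, then global), and fall back to the predicted pose (fusion positioning) when repositioning fails and the fusion conditions hold. A sketch, in which the final fallback branch is an assumption not stated in the claims:

```python
def resolve_pose(optimal, predicted, misaligned, relocate, fusion_ok):
    """Dispatch over claims 4-7: keep the optimal pose when positioning is
    accurate; otherwise run the repositioning flow; if it fails and the
    fusion positioning condition holds, fall back to the predicted pose.
    The final branch (keeping the optimal pose when neither flow applies)
    is an assumption for completeness, not stated in the claims."""
    if not misaligned:
        return optimal
    repositioned = relocate()             # returns a pose, or None on failure
    if repositioned is not None:
        return repositioned
    if fusion_ok:
        return predicted                  # fusion positioning (claim 7)
    return optimal
```

Fusion positioning is thus the last resort: when the map no longer supports a trustworthy scan match, the odometer-and-gyroscope prediction is used as-is until matching recovers.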
8. A multi-sensor fusion laser radar positioning system, characterized in that it is applied to a mobile robot on which a laser radar sensor, an odometer and a gyroscope are arranged, the system comprising:
the map building module is used for building a two-dimensional map in the current environment based on the position information of each obstacle in the current environment acquired by the laser radar sensor and the pose information of the robot acquired by the odometer and the gyroscope in real time, and updating the two-dimensional map in real time;
the map stopping updating module is connected with the map building module and is used for stopping updating the two-dimensional map when a control instruction corresponding to the completion of obstacle scanning is received, and storing the latest updated two-dimensional map as a final two-dimensional map; wherein the final two-dimensional map comprises: coordinate position information of each obstacle under a map coordinate system;
the scanning matching module is connected with the map stop updating module and used for loading the saved final two-dimensional map, and carrying out scanning matching on the position information of the obstacles collected by the laser radar sensor and the coordinate position information of each obstacle in the two-dimensional map under the map coordinate system so as to obtain the optimal pose of the robot at the current moment;
the positioning judging module is connected with the scanning matching module and used for judging whether inaccurate positioning occurs in the positioning process;
the positioning accuracy module is connected with the positioning judgment module and is used for taking the optimal pose as the final pose of the robot if the situation of inaccurate positioning does not occur;
the positioning inaccuracy solving module is connected with the positioning judging module and is used for carrying out a repositioning process and/or a fusion positioning process if the situation of inaccurate positioning occurs so as to obtain the final pose of the robot;
the construction of the two-dimensional map in the current environment based on the position information of each obstacle in the current environment collected by the laser radar sensor and the pose information of the robot collected by the odometer and the gyroscope in real time, and the updating of the two-dimensional map in real time comprises the following steps:
constructing a map coordinate system according to initial pose information of the robot acquired by the odometer and the gyroscope;
converting the position information of each obstacle in the current environment acquired by the laser radar sensor into the same robot coordinate system and then into a world coordinate system to obtain the movement distortion removal position information of each obstacle in the current environment acquired by the laser radar sensor;
acquiring the predicted pose of the robot at the current moment by utilizing the pose information of the robot acquired in real time by the odometer and the gyroscope;
taking the predicted pose as a central pose matched with the laser radar scanning at the current moment, searching one or more candidate poses in a set first searching range, and calculating a matching score of scanning and matching the position information of the obstacle scanned by the laser radar under each candidate pose and the coordinate position information of each obstacle on the two-dimensional map under the map coordinate system;
selecting an initial pose at the current moment based on the matching score of each candidate pose, and performing nonlinear optimization on the initial pose to obtain an optimal pose at the current moment and the matching score of the optimal pose;
and inserting the motion distortion removal position information of each obstacle acquired by the laser radar into the map coordinate system according to the optimal pose, and updating to obtain a two-dimensional map of the current environment.
9. A multi-sensor fusion lidar positioning terminal, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the multi-sensor fusion laser radar positioning method of any one of claims 1 to 7.
CN202110777452.0A 2021-07-09 2021-07-09 Multi-sensor fusion laser radar positioning method, system and terminal Active CN113503876B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110777452.0A CN113503876B (en) 2021-07-09 2021-07-09 Multi-sensor fusion laser radar positioning method, system and terminal

Publications (2)

Publication Number Publication Date
CN113503876A (en) 2021-10-15
CN113503876B (en) 2023-11-21

Family

ID=78012460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110777452.0A Active CN113503876B (en) 2021-07-09 2021-07-09 Multi-sensor fusion laser radar positioning method, system and terminal

Country Status (1)

Country Link
CN (1) CN113503876B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024001339A1 (en) * 2022-07-01 2024-01-04 华为云计算技术有限公司 Pose determination method and apparatus, and computing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105241461A (en) * 2015-11-16 2016-01-13 曾彦平 Map creating and positioning method of robot and robot system
CN108931245A (en) * 2018-08-02 2018-12-04 上海思岚科技有限公司 The local method for self-locating and equipment of mobile robot
CN110285806A (en) * 2019-07-05 2019-09-27 电子科技大学 The quick Precision Orientation Algorithm of mobile robot based on the correction of multiple pose
CN111536964A (en) * 2020-07-09 2020-08-14 浙江大华技术股份有限公司 Robot positioning method and device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108717710B (en) * 2018-05-18 2022-04-22 京东方科技集团股份有限公司 Positioning method, device and system in indoor environment

Also Published As

Publication number Publication date
CN113503876A (en) 2021-10-15

Similar Documents

Publication Publication Date Title
JP6987797B2 (en) Laser scanner with real-time online egomotion estimation
CN109211251B (en) Instant positioning and map construction method based on laser and two-dimensional code fusion
WO2021135645A1 (en) Map updating method and device
Holz et al. Sancta simplicitas-on the efficiency and achievable results of SLAM using ICP-based incremental registration
CN109509210B (en) Obstacle tracking method and device
Saarinen et al. Normal distributions transform occupancy maps: Application to large-scale online 3D mapping
CN111402339B (en) Real-time positioning method, device, system and storage medium
CN108638062B (en) Robot positioning method, device, positioning equipment and storage medium
CN110930495A (en) Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium
CN111895989A (en) Robot positioning method and device and electronic equipment
US9651388B1 (en) System and method for improved simultaneous localization and mapping
CN113514843A (en) Multi-subgraph laser radar positioning method and system and terminal
CN113587933B (en) Indoor mobile robot positioning method based on branch-and-bound algorithm
JP2019028988A (en) System and method for executing fault-tolerant simultaneous localization and mapping in robotic clusters
CN110986956B (en) Autonomous learning global positioning method based on improved Monte Carlo algorithm
CN111680673A (en) Method, device and equipment for detecting dynamic object in grid map
CN113503876B (en) Multi-sensor fusion laser radar positioning method, system and terminal
CN112904358B (en) Laser positioning method based on geometric information
CN112327329A (en) Obstacle avoidance method, target device, and storage medium
CN111045433B (en) Obstacle avoidance method for robot, robot and computer readable storage medium
CN116608847A (en) Positioning and mapping method based on area array laser sensor and image sensor
CN113759928B (en) Mobile robot high-precision positioning method for complex large-scale indoor scene
Garrote et al. Mobile robot localization with reinforcement learning map update decision aided by an absolute indoor positioning system
JPH11194822A (en) Global map constructing method for mobile robot
KR102097722B1 (en) Apparatus and method for posture estimation of robot using big cell grid map and recording medium storing program for executing the same and computer program stored in recording medium for executing the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant