CN114046790A - Factor graph double-loop detection method


Info

Publication number
CN114046790A
Authority
CN
China
Prior art keywords
loop detection
factor graph
sensor
laser radar
point cloud
Legal status
Pending
Application number
CN202111232146.5A
Other languages
Chinese (zh)
Inventor
田志宇
曾庆化
孙克诚
刘建业
邵晨
李一能
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
2021-10-22
Filing date
2021-10-22
Publication date
2022-02-15
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202111232146.5A
Publication of CN114046790A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/165 Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
    • G01C 21/1652 ... combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S 19/47 ... the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a factor graph double loop detection method. Measurement information is collected from multi-source sensors such as a laser radar, a satellite receiver and an IMU, and state information is obtained with an adaptive factor graph multi-source information fusion navigation algorithm. Coarse loop detection is first performed on the basis of state constraints; if it passes, accurate loop detection is performed with the laser radar point cloud map. If both loop detections pass, the state information is extracted and factor graph loop correction is carried out; if not, the node at the next moment is detected. The invention can be applied to urban positioning, vehicle speed measurement, attitude alignment and other scenes; it has good robustness and low time complexity, and can shorten the loop detection time of the factor graph.

Description

Factor graph double-loop detection method
Technical Field
The invention belongs to the field of factor graph multi-source information fusion navigation algorithms, and particularly relates to a factor graph double-loop detection method.
Background
A factor graph is a bipartite probabilistic graphical model containing two node types: variable nodes, which represent the variables of a global multivariate function, and factor nodes, which represent the local functions in its factorization. Each local function depends only on a subset of the global function's variables, and an edge connects a factor node to a variable node if and only if that variable is an argument of the local function. Factor graphs are used as a graphical tool for analysing problems in many areas, for example signal processing, artificial intelligence and integrated navigation. In the field of integrated navigation, the factor graph multi-source information fusion navigation algorithm has become popular in the research community because of characteristics such as rich sensor information and plug-and-play operation, for example completing urban navigation and positioning by combining vehicle sensors with urban environment information, or realizing robust navigation of an aircraft under strong airflow interference by excluding faulty sensors.
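As an illustration of this bipartite structure (not part of the patent text), a minimal factor graph sketch in Python might hold variable nodes, factor nodes, and the edges between each factor and the variables that are its arguments; all names and values below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class FactorGraph:
    """Minimal bipartite factor graph sketch: variable nodes and factor nodes."""
    variables: Dict[str, float] = field(default_factory=dict)   # variable node -> current value
    factors: List[Tuple[Callable[..., float], List[str]]] = field(default_factory=list)

    def add_variable(self, name: str, value: float) -> None:
        self.variables[name] = value

    def add_factor(self, local_fn: Callable[..., float], var_names: List[str]) -> None:
        # An edge exists between a factor node and a variable node if and only
        # if that variable is an argument of the factor's local function.
        self.factors.append((local_fn, var_names))

    def total_cost(self) -> float:
        # The global function factorizes into local functions; summing the
        # (negative log) local functions gives the objective to be minimized.
        return sum(fn(*(self.variables[v] for v in var_names))
                   for fn, var_names in self.factors)

# Example: one odometry-style factor connecting two pose variables.
g = FactorGraph()
g.add_variable("x0", 0.0)
g.add_variable("x1", 0.9)
g.add_factor(lambda a, b: (b - a - 1.0) ** 2, ["x0", "x1"])
print(g.total_cost())   # squared residual of the single factor (about 0.01)
```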
In integrated navigation, pose estimation is usually a recursive process: the pose of the current frame is calculated from the pose of the previous frame, so error divergence in this process leads to large accumulated errors, which loop detection can effectively correct. Loop detection is widely used in laser radar and visual SLAM; it enables the navigation system to recognize positions it has already visited, and identifying this past position information resolves the drift of the pose solution. For example, when multi-source information fusion navigation is carried out where reliable geometric positioning information is lacking, such as in urban canyons or indoors, the drift of dead-reckoning sensor errors over time is the primary factor limiting navigation accuracy, and the traditional adaptive factor algorithm can hardly meet practical application scenarios.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a factor graph double loop detection method that introduces state constraints.
In order to achieve the above object, the present invention provides a factor graph double loop detection method, which specifically comprises the following steps:
step 1: acquiring measurement information of a satellite, inertial navigation and a laser radar sensor;
step 2: defining the system state variables as the variable nodes of a factor graph and the sensor measurement information as the factor nodes, improving the error model with an adaptive method so as to construct an adaptive factor graph model, obtaining the state variable at the current moment by factor graph optimization, extracting state variable classification features through statistical analysis, and performing coarse loop detection by comparing the classification features at each moment; if the coarse loop detection passes, the key frames for accurate loop detection are locked;
step 3: performing neural network training on the laser radar measurement information that passed the coarse loop detection in step 2 to obtain a network structure suitable for the preset scene, using the network to perform point-cloud feature extraction on the two key frames of laser radar measurement information locked in step 2 to obtain classification features, and constructing a similarity function between the two frames for accurate loop detection; if the accurate loop detection passes, loop detection between the two moments is completed;
step 4: acquiring state transformation information by using the laser radar measurement information from step 3 and adding a loop factor for correction.
Further, the sensor measurement information in step 1 includes longitude and latitude information calculated by a satellite, specific force and angular velocity information output by an inertial navigation sensor, and point cloud information output by a laser radar.
Further, in step 2, depending on the factor graph configuration and the types of sensors, the state variable at the current moment may use various information such as longitude and latitude, horizontal velocity and attitude as constraint conditions; the embodiment of the present invention takes the longitude and latitude position information as an example.
Further, the step 2 specifically comprises:
step 2.1: constructing a factor graph model by using the measurement information in the step 1, and adding an adaptive method to the factor graph to modify an error model for optimization;
step 2.2: performing coarse loop detection with the state variable at the current moment, the search strategy being a KD-tree lookup: search the k points closest to the matching point and calculate the time interval of each of the k points relative to the matching point; if the time interval is greater than a preset value and the separation distance is less than a preset value, the coarse loop detection is completed and step 3 is entered; otherwise, the loop detection ends and is executed again with the input at the next moment.
Further, the adaptive factor graph model in step 2 is

$$X^{*}=\arg\min_{X}\sum_{i=1}^{n}\sum_{k=1}^{K}\omega_{i,k}\,\bigl\lVert h_{k}(X_{i})-z_{i,k}\bigr\rVert_{\Sigma_{i,k}}^{2}$$

where $X^{*}$ is the state variable to be optimized, $\arg\min_{x}f(x)$ is the value of the variable $x$ at which the objective function $f(x)$ takes its minimum, the outer sum accumulates the objective functions from time 1 to the current time, $\lVert\cdot\rVert_{\Sigma}^{2}$ is the Mahalanobis norm of the objective function, $\omega_{i,k}$ is the weight function, $h_{k}(X_{i})$ is the state prediction of the $k$-th sensor at time $i$, and $z_{i,k}$ is the measurement of the $k$-th sensor at time $i$. The weight of each sensor's factor node is adjusted according to its residual $r_{i,k}=h_{k}(X_{i})-z_{i,k}$.
The specific weight function $\omega_{i,k}$ is

$$\omega_{i,k}=\begin{cases}1, & \lVert r_{i,k}\rVert\le r_{0}\\ \exp\bigl(-\gamma_{k}\,(\lVert r_{i,k}\rVert-r_{0})\bigr), & \lVert r_{i,k}\rVert> r_{0}\end{cases}$$

where $\exp(\cdot)$ is the exponential function, $\lVert r_{i,k}\rVert$ is the residual value of the sensor and $r_{0}$ is the residual threshold of the sensor: the higher the positioning accuracy of the sensor, the lower the value of $r_{0}$, i.e. the smaller the weight change. $\gamma_{k}$ is a parameter characterizing the positioning accuracy of each sensor and determines the speed of the weight change.
Further, the step 3 specifically includes:
step 3.1: carrying out PointNet training by using the laser radar point cloud data obtained in the step 1 to obtain a network suitable for a preset environment;
step 3.2: performing feature extraction on the two matching points obtained in the step 2 by using a PointNet network to obtain m classification features corresponding to the point cloud picture;
step 3.3: calculating Hamming distances according to the m classification features to judge similarity, finishing accurate loop detection if the similarity reaches a preset value, and entering next loop correction; otherwise, ending the loop detection and waiting for the input to be executed again at the next moment.
Further, the PointNet method in step 3 is a point cloud processing network that can directly take an unordered point cloud as input and learn global features of the point cloud data; the specific steps are as follows:
step 3.1.1: an input transformation for aligning the input point cloud data using a Spatial Transformation Network (STN);
step 3.1.2: point cloud processing, namely processing point cloud data by using a multilayer perceptron (MLP), and raising the dimension of the 3-dimensional data to 64-dimensional data to extract point cloud characteristics;
step 3.1.3: feature transformation, using STN to align input features;
step 3.1.4: feature processing, namely processing the feature data with a multilayer perceptron (MLP) and raising the 64-dimensional data to 1024 dimensions;
step 3.1.5: processing a symmetric function, and fusing the features by utilizing maximum pooling to generate 1 x 1024 dimensional global features;
step 3.1.6: feature classification, namely learning with the MLP on the input 1024-dimensional point cloud feature and outputting m classification features.
Further, the step 4 specifically includes:
step 4.1: registering the laser radar point cloud information with the iterative closest point (ICP) method to acquire the pose transformation information and the corresponding error function between the two moments;
step 4.2: and inputting pose transformation information and corresponding error functions between two moments into the factor graph, and establishing loop factor nodes to further realize global optimization.
Compared with the prior art, the invention has the following beneficial effects: the method performs well in conventional application scenes and can be applied to vehicle-mounted integrated navigation under satellite denial, vehicle-mounted multi-source sensor speed measurement, attitude correction and other fields. It is robust and reliable in use, and in scenes lacking reliable geometric positioning information it uses state constraints to reduce the time complexity of loop detection, thereby realizing factor graph loop detection with state constraints introduced.
Drawings
FIG. 1 is a flowchart of the overall algorithm of the present invention;
FIG. 2 is a diagram of a factor graph structure incorporating state constraints;
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention.
The invention will be further described with reference to the accompanying drawings in which:
As shown in FIG. 1, the present invention discloses a factor graph double loop detection method, which comprises the following specific steps:
step 1: acquiring measurement information of a satellite, inertial navigation and a laser radar sensor;
Specifically, the sensor measurement information includes the longitude and latitude information solved by the satellite receiver, the specific force and angular velocity information output by the inertial navigation sensor, and the point cloud information output by the laser radar. Before the factor graph model is constructed, the measurement information of the different sensors needs to be preprocessed, including time matching of the factor nodes and IMU pre-integration. Factor node matching takes the 10 Hz acquisition frequency of the laser radar as the reference; the IMU data acquisition frequency is 100 Hz, and the relative pose between two adjacent nodes is calculated by pre-integration to realize synchronization; the GPS data acquisition frequency is 1 Hz, and the GPS data are matched using their timestamps.
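A rough sketch of this timestamp matching step follows; the helper below is hypothetical and only illustrates aligning the 1 Hz GPS stream to the 10 Hz laser radar reference nodes (the IMU pre-integration between adjacent nodes is not shown).

```python
import numpy as np

def match_by_timestamp(ref_times, sensor_times, max_dt=0.05):
    """For each reference (laser radar) node time, return the index of the
    nearest sensor sample, or -1 if none lies within max_dt seconds."""
    sensor_times = np.asarray(sensor_times)
    idx = np.searchsorted(sensor_times, ref_times)
    matched = []
    for t, i in zip(np.asarray(ref_times), idx):
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sensor_times)]
        j = min(candidates, key=lambda c: abs(sensor_times[c] - t))
        matched.append(j if abs(sensor_times[j] - t) <= max_dt else -1)
    return np.array(matched)

# Example: 10 Hz laser radar nodes, 1 Hz GPS fixes; IMU samples between two
# adjacent nodes would additionally be pre-integrated into a relative pose.
lidar_t = np.arange(0.0, 10.0, 0.1)
gps_t = np.arange(0.0, 10.0, 1.0)
gps_idx = match_by_timestamp(lidar_t, gps_t, max_dt=0.05)
```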
Step 2: defining the system state variables as the variable nodes of a factor graph and the sensor measurement information as the factor nodes, improving the error model with an adaptive method so as to construct an adaptive factor graph model, obtaining the state variable at the current moment by factor graph optimization, extracting state variable classification features through statistical analysis, and performing coarse loop detection by comparing the classification features at each moment; if the coarse loop detection passes, locking the key frames for accurate loop detection and continuing with step 3; if not, executing step 1 at the next moment;
Depending on the factor graph configuration and the types of sensors, the state variable at the current moment may use various information such as longitude and latitude, horizontal velocity and attitude as constraint conditions; the embodiment of the invention takes the longitude and latitude position information as an example.
The step 2 specifically comprises the following steps:
step 2.1: constructing a factor graph model by using the measurement information in the step 1, and adding an adaptive method to the factor graph to modify an error model for optimization;
step 2.2: performing coarse loop detection with the state variable at the current moment, the search strategy being a KD-tree lookup: search the k points closest to the matching point and calculate the time interval of each of the k points relative to the matching point; if the time interval is greater than a preset value and the separation distance is less than a preset value, the coarse loop detection is completed and step 3 is entered; otherwise, the loop detection ends and is executed again with the input at the next moment.
Specifically, in this embodiment, the adaptive factor graph model in step 2 is

$$X^{*}=\arg\min_{X}\sum_{i=1}^{n}\sum_{k=1}^{K}\omega_{i,k}\,\bigl\lVert h_{k}(X_{i})-z_{i,k}\bigr\rVert_{\Sigma_{i,k}}^{2}$$

where $X^{*}$ is the state variable to be optimized, $\arg\min_{x}f(x)$ is the value of the variable $x$ at which the objective function $f(x)$ takes its minimum, the outer sum accumulates the objective functions from time 1 to the current time, $\lVert\cdot\rVert_{\Sigma}^{2}$ is the Mahalanobis norm of the objective function, $\omega_{i,k}$ is the weight function, $h_{k}(X_{i})$ is the state prediction of the $k$-th sensor at time $i$, and $z_{i,k}$ is the measurement of the $k$-th sensor at time $i$. The weight of each sensor's factor node is adjusted according to its residual $r_{i,k}=h_{k}(X_{i})-z_{i,k}$. The specific weight function $\omega_{i,k}$ is

$$\omega_{i,k}=\begin{cases}1, & \lVert r_{i,k}\rVert\le r_{0}\\ \exp\bigl(-\gamma_{k}\,(\lVert r_{i,k}\rVert-r_{0})\bigr), & \lVert r_{i,k}\rVert> r_{0}\end{cases}$$

where $\exp(\cdot)$ is the exponential function, $\lVert r_{i,k}\rVert$ is the residual value of the sensor and $r_{0}$ is the residual threshold of the sensor: the higher the positioning accuracy of the sensor, the lower the value of $r_{0}$, i.e. the smaller the weight change. $\gamma_{k}$ is a parameter characterizing the positioning accuracy of each sensor and determines the speed of the weight change. In the example of the invention, the inertial navigation system used in the vehicle-mounted integrated navigation experiment is an Xsens MTi-10 series sensor with a gyro constant drift of 18 deg/h. The laser radar sensor is an HDL-32E; its real-time point cloud matching error can be as low as 0.2 cm and its effective range is 80 m. The GPS receiver is a common u-blox M8T model; because of the urban multipath effect the satellite positioning accuracy is low, with an average positioning accuracy of 15-20 m. When designing the adaptive weights, the adaptive weight parameters of the laser radar, the IMU and the satellite sensor are set to 2, 4 and 8 respectively after adjustment against the performance of the reference sensor.
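For illustration only, the residual-driven weighting described above could be sketched as follows; the exponential cut-off form and all parameter values are assumptions made for the sketch, not the patent's exact formula or tuning.

```python
import numpy as np

def adaptive_weight(residual, r0, gamma):
    """Assumed weight rule: full weight while the residual is within the
    sensor's threshold r0, exponential down-weighting beyond it, with the
    decay speed set by gamma."""
    r = float(np.linalg.norm(residual))
    return 1.0 if r <= r0 else float(np.exp(-gamma * (r - r0)))

# Hypothetical per-sensor thresholds (rough orders of magnitude, in metres):
sensors = {
    "lidar":     {"r0": 0.002, "gamma": 2.0},   # ~0.2 cm matching error
    "imu":       {"r0": 0.5,   "gamma": 4.0},
    "satellite": {"r0": 20.0,  "gamma": 8.0},   # ~15-20 m urban GPS error
}
weights = {name: adaptive_weight(residual=np.array([0.1, 0.1]),
                                 r0=p["r0"], gamma=p["gamma"])
           for name, p in sensors.items()}
```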
In addition, the search strategy adopted in step 2 is a KD-tree lookup: search the k points closest to the matching point position and calculate the time interval of each of the k points relative to the matching point; if the time interval is greater than a preset value and the separation distance is less than a preset value, the coarse loop detection is completed and the subsequent accurate loop detection is entered; otherwise, the loop detection ends and is executed again with the input at the next moment.
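A minimal sketch of this coarse check, assuming planar position states and illustrative thresholds (the patent's preset values are not specified here), could use scipy's KD-tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def coarse_loop_detection(positions, times, query_idx, k=5,
                          min_time_gap=60.0, max_dist=5.0):
    """positions: (N, 2) history of position states, times: (N,) timestamps.
    Returns the index of a loop-candidate key frame (far in time, close in
    space), or None.  Threshold values here are illustrative only."""
    tree = cKDTree(positions)
    dists, idxs = tree.query(positions[query_idx], k=min(k, len(positions)))
    for d, j in zip(np.atleast_1d(dists), np.atleast_1d(idxs)):
        if j == query_idx:
            continue
        # Loop candidate: large time interval, small separation distance.
        if times[query_idx] - times[j] > min_time_gap and d < max_dist:
            return int(j)
    return None
```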
Step 3: performing neural network training on the laser radar measurement information to obtain a network structure suitable for the preset scene, using the network to perform point-cloud feature extraction on the two key frames of laser radar measurement information locked in step 2 to obtain classification features, and constructing a similarity function between the two frames for accurate loop detection; if it passes, loop detection between the two moments is confirmed and step 4 is executed; if not, step 1 is executed at the next moment;
specifically, the step 3 comprises the following steps:
step 3.1: carrying out PointNet training by using the laser radar point cloud data obtained in the step 1 to obtain a network suitable for a preset environment;
step 3.2: performing feature extraction on the two matching points obtained in the step 2 by using a PointNet network to obtain m classification features corresponding to the point cloud picture;
step 3.3: calculating Hamming distances according to the m classification features to judge similarity, finishing accurate loop detection if the similarity reaches a preset value, and entering next loop correction; otherwise, ending the loop detection and waiting for the input to be executed again at the next moment.
In addition, in this embodiment, the PointNet method in step 3 is a point cloud processing network that can directly take an unordered point cloud as input and learn global features of the point cloud data; the specific steps are as follows (a structural sketch in code follows this list):
step 3.1.1: input transformation, namely aligning the input point cloud data with a spatial transformation network (STN); the network generates an affine transformation matrix to normalize rotations, translations and other changes of the point cloud, its input being the original point cloud data and its output a 3 x 3 rotation matrix;
step 3.1.2: point cloud processing, namely processing the point cloud data with a multilayer perceptron (MLP); the n x 3 input points are each processed by a 64 x 64 two-layer perceptron, which learns and extracts per-point information and raises the 3-dimensional data to 64 dimensions to extract point cloud features;
step 3.1.3: feature transformation, namely aligning the input 64-dimensional point cloud features with an STN;
step 3.1.4: feature processing, namely processing the feature data with the MLP and raising the 64-dimensional data to 1024 dimensions;
step 3.1.5: symmetric function processing, namely fusing the n x 1024 point cloud features by max pooling to generate a 1 x 1024-dimensional global feature;
step 3.1.6: feature classification, namely learning with the MLP on the input 1024-dimensional point cloud feature and outputting m classification features representing the classification categories, each category corresponding to a point cloud classification score.
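The following is only a structural sketch of the per-point MLP, max-pooling and classification chain listed above, together with the Hamming-distance similarity of step 3.3; it omits the input and feature transforms and is not the trained network of the embodiment (PyTorch is assumed).

```python
import torch
import torch.nn as nn

class PointNetLikeExtractor(nn.Module):
    """Structural sketch: per-point MLPs, max-pool symmetric function,
    then an MLP head producing m classification scores."""
    def __init__(self, m_classes=16):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())     # 3 -> 64 per point
        self.mlp2 = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                                  nn.Linear(128, 1024), nn.ReLU())  # 64 -> 1024 per point
        self.head = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(),
                                  nn.Linear(256, m_classes))        # 1024 -> m scores

    def forward(self, pts):                # pts: (B, n, 3) unordered point cloud
        x = self.mlp2(self.mlp1(pts))      # (B, n, 1024)
        x, _ = torch.max(x, dim=1)         # symmetric function: (B, 1024) global feature
        return self.head(x)                # (B, m) classification scores

def hamming_similarity(scores_a, scores_b):
    """Binarize the m classification scores and compare by Hamming distance;
    a simple stand-in for the similarity function between two key frames."""
    a, b = scores_a > 0, scores_b > 0
    dist = (a != b).sum().item()
    return 1.0 - dist / a.numel()
```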
Step 4: acquiring state transformation information by using the laser radar measurement information obtained in step 3 and adding a loop factor for correction.
Specifically, in this embodiment, step 4 is as follows (a minimal ICP sketch follows this list):
step 4.1: the laser radar point cloud information is input to the iterative closest point (ICP) method, which registers the point cloud data between the two moments; the optimal coordinate change, i.e. the rotation matrix R and translation vector t, is calculated iteratively by least squares so that the error function is minimized, thereby obtaining the pose transformation information and the corresponding error function between the two moments;
step 4.2: the pose transformation information and the corresponding error function between the two moments are input into the factor graph, and a loop factor node is established between the two matched moments, thereby realizing global optimization.
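As an illustration of step 4.1, a minimal point-to-point ICP (nearest-neighbour correspondences plus a closed-form SVD update) might look as follows; this is a simplified sketch, not the embodiment's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, iters=20):
    """Register src onto dst (both (N, 3) point clouds).  Returns the rotation
    matrix R, translation vector t and the final mean squared error."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)                 # nearest neighbours in dst
        matched = dst[idx]
        mu_m, mu_d = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_m).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:                  # enforce a proper rotation
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_d - dR @ mu_m
        R, t = dR @ R, dR @ t + dt                 # compose the incremental update
    moved = src @ R.T + t
    _, idx = tree.query(moved)
    err = float(np.mean(np.sum((moved - dst[idx]) ** 2, axis=1)))
    return R, t, err
```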
The factor graph structure used in steps 2-4 is shown in FIG. 2. A prior factor node $f^{prior}$ is added to the variable node at the initial moment; a GPS unary factor node $f_{i}^{GPS}$ is connected to the variable node at moment $i$; an IMU binary factor node $f_{i}^{IMU}$ and a Lidar binary factor node $f_{i}^{Lidar}$ each connect the variable nodes at two adjacent moments. The double loop detection method is specifically as follows: first, coarse loop detection is performed, i.e. the GPS position state information is used to check whether the GPS loop condition is satisfied; then accurate loop detection is performed, i.e. the laser radar point cloud information is used to check whether the Lidar loop condition is satisfied. If both loop detections pass, a loop factor $f_{back}$ is added for correction; otherwise, the loop detection fails and is executed again with the input at the next moment.
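Purely as an illustration of how the loop factor could be appended once double loop detection passes, the sketch below uses the GTSAM library (an assumption; the patent does not name a specific solver), adding a BetweenFactor with the ICP relative pose between the two matched key frames.

```python
import numpy as np
import gtsam

# Hypothetical matched key-frame indices from the double loop detection.
i, j = 12, 345
x_i, x_j = gtsam.symbol('x', i), gtsam.symbol('x', j)

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.1, 0.1, 0.1]))

# Anchor one pose (standing in for the prior factor; the GPS/IMU/Lidar factors
# of the full graph are omitted from this sketch).
graph.add(gtsam.PriorFactorPose3(x_i, gtsam.Pose3(), prior_noise))

# Loop factor: the relative pose between moments i and j obtained from ICP
# (placeholder value here).
T_ij = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(x_i, x_j, T_ij, loop_noise))

initial = gtsam.Values()
initial.insert(x_i, gtsam.Pose3())
initial.insert(x_j, gtsam.Pose3())
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
```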
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may store a program, and when the program is executed, the program includes some or all of the steps of any of the factor graph double-loopback detection methods described in the above method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (8)

1. A factor graph double loop detection method, characterized by comprising the following steps:
acquiring measurement information of a satellite, inertial navigation and a laser radar sensor;
step two, defining system state variables as variable nodes of a factor graph, defining the measurement information of the laser radar sensor as the factor nodes, improving an error model by using a self-adaptive method, thereby constructing a self-adaptive factor graph model, obtaining the state variables at the current moment by using a factor graph optimization method, extracting state variable classification features through statistical analysis, and performing coarse loop detection by comparing the classification features at all moments, thereby locking a key frame of accurate loop detection;
step three, performing neural network training on the laser radar measurement information subjected to the coarse loop detection in step two to obtain a network structure suitable for a preset scene, using the neural network to perform point-cloud feature extraction on the laser radar measurement information of the key frames to obtain classification features, and constructing a similarity function between the two frames to perform accurate loop detection; if the accurate loop detection passes, confirming that loop detection between the two moments is completed;
and step four, acquiring state transformation information by using the laser radar measurement information subjected to the accurate loop detection in the step three, and adding loop factors for correction.
2. The detection method according to claim 1, characterized in that: in the second step, depending on the factor graph configuration and the different types of sensors, the constraint conditions of the state variables at the current moment comprise one or more of longitude and latitude, horizontal velocity and attitude.
3. The detection method according to claim 1, characterized in that: the second step comprises the following steps:
constructing a factor graph model by using the measurement information in the step one, and adding an adaptive method to the factor graph to correct an error model for optimization;
performing coarse loop detection with the state variable at the current moment, the search strategy being a KD-tree lookup: searching the k points closest to the matching point, calculating the time interval of the k points relative to the matching point, and completing the coarse loop detection if the time interval is greater than a preset value and the separation distance is less than the preset value.
4. The detection method according to claim 3, characterized in that the adaptive factor graph model in the second step is

$$X^{*}=\arg\min_{X}\sum_{i=1}^{n}\sum_{k=1}^{K}\omega_{i,k}\,\bigl\lVert h_{k}(X_{i})-z_{i,k}\bigr\rVert_{\Sigma_{i,k}}^{2}$$

where $X^{*}$ is the state variable to be optimized, $\arg\min_{x}f(x)$ is the value of the variable $x$ at which the objective function $f(x)$ takes its minimum, the outer sum accumulates the objective functions from time 1 to the current time, $\lVert\cdot\rVert_{\Sigma}^{2}$ is the Mahalanobis norm of the objective function, $\omega_{i,k}$ is the weight function, $h_{k}(X_{i})$ is the state prediction of the $k$-th sensor at time $i$, and $z_{i,k}$ is the measurement of the $k$-th sensor at time $i$; the weight of each sensor's factor node is adjusted according to its residual $r_{i,k}=h_{k}(X_{i})-z_{i,k}$.
5. The detection method according to claim 4, characterized in that the weight function $\omega_{i,k}$ is

$$\omega_{i,k}=\begin{cases}1, & \lVert r_{i,k}\rVert\le r_{0}\\ \exp\bigl(-\gamma_{k}\,(\lVert r_{i,k}\rVert-r_{0})\bigr), & \lVert r_{i,k}\rVert> r_{0}\end{cases}$$

where $\exp(\cdot)$ is the exponential function, $\lVert r_{i,k}\rVert$ is the residual value of the sensor and $r_{0}$ is the residual threshold of the sensor: the higher the positioning accuracy of the sensor, the lower the value of $r_{0}$, i.e. the smaller the weight change; $\gamma_{k}$ is a parameter characterizing the positioning accuracy of each sensor.
6. The detection method according to claim 1, wherein the third step comprises:
carrying out PointNet training with the laser radar point cloud data obtained in the first step to obtain a network suitable for a preset environment; performing feature extraction with the PointNet network on the two matching points obtained in the second step to obtain m classification features corresponding to the point cloud map;
calculating Hamming distances according to the m classification features to judge similarity, finishing accurate loop detection if the similarity reaches a preset value, and entering next loop correction; otherwise, ending the loop detection and waiting for the input to be executed again at the next moment.
7. The detection method according to claim 1, wherein the fourth step comprises:
registering the laser radar point cloud information with the iterative closest point method to obtain the pose transformation information and the corresponding error function between the two moments;
and inputting pose transformation information and corresponding error functions between two moments into the factor graph, and establishing loop factor nodes to further realize global optimization.
8. A computer-readable storage medium, comprising: the computer-readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements the steps of a method for factor graph double loopback detection as claimed in any one of claims 1 to 7.
CN202111232146.5A 2021-10-22 2021-10-22 Factor graph double-loop detection method Pending CN114046790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111232146.5A CN114046790A (en) 2021-10-22 2021-10-22 Factor graph double-loop detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111232146.5A CN114046790A (en) 2021-10-22 2021-10-22 Factor graph double-loop detection method

Publications (1)

Publication Number Publication Date
CN114046790A true CN114046790A (en) 2022-02-15

Family

ID=80205682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111232146.5A Pending CN114046790A (en) 2021-10-22 2021-10-22 Factor graph double-loop detection method

Country Status (1)

Country Link
CN (1) CN114046790A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115326068A (en) * 2022-10-17 2022-11-11 北京理工大学 Design method and system for laser radar-inertial measurement unit fusion odometer
CN115685292A (en) * 2023-01-05 2023-02-03 中国人民解放军国防科技大学 Navigation method and device of multi-source fusion navigation system


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190219401A1 (en) * 2018-01-12 2019-07-18 The Trustees Of The University Of Pennsylvania Probabilistic data association for simultaneous localization and mapping
CN110274588A (en) * 2019-06-19 2019-09-24 南京航空航天大学 Double-layer nested factor graph multi-source fusion air navigation aid based on unmanned plane cluster information
CN111325794A (en) * 2020-02-23 2020-06-23 哈尔滨工业大学 Visual simultaneous localization and map construction method based on depth convolution self-encoder
CN111337020A (en) * 2020-03-06 2020-06-26 兰州交通大学 Factor graph fusion positioning method introducing robust estimation
CN112304307A (en) * 2020-09-15 2021-02-02 浙江大华技术股份有限公司 Positioning method and device based on multi-sensor fusion and storage medium
CN112907491A (en) * 2021-03-18 2021-06-04 中煤科工集团上海有限公司 Laser point cloud loopback detection method and system suitable for underground roadway
CN113432600A (en) * 2021-06-09 2021-09-24 北京科技大学 Robot instant positioning and map construction method and system based on multiple information sources

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
倪志康; 厉茂海; 林睿; 孙立宁; 刘仕琦: "Research on SLAM based on the fusion of 3D lidar and RTK", Manufacturing Automation (制造业自动化), no. 07, 25 July 2020 (2020-07-25) *
周雅婧 et al.: "Research on integrated navigation algorithms based on factor graphs", Proceedings of the 11th China Satellite Navigation Conference, S10: PNT Architecture and Multi-source Fusion Navigation, 23 November 2020 (2020-11-23) *


Similar Documents

Publication Publication Date Title
Clark et al. Vinet: Visual-inertial odometry as a sequence-to-sequence learning problem
Yin et al. 3d lidar-based global localization using siamese neural network
CN109682382B (en) Global fusion positioning method based on self-adaptive Monte Carlo and feature matching
CN110717927A (en) Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN111693047A (en) Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
CN113313763B (en) Monocular camera pose optimization method and device based on neural network
CN110631588B (en) Unmanned aerial vehicle visual navigation positioning method based on RBF network
CN114046790A (en) Factor graph double-loop detection method
US11830218B2 (en) Visual-inertial localisation in an existing map
Yin et al. Pse-match: A viewpoint-free place recognition method with parallel semantic embedding
Wen et al. CAE-RLSM: Consistent and efficient redundant line segment merging for online feature map building
CN113532438B (en) Improved ICCP terrain matching method under large initial positioning error
Goldman et al. Robust epipolar geometry estimation using noisy pose priors
CN113112547A (en) Robot, repositioning method thereof, positioning device and storage medium
CN117053779A (en) Tightly coupled laser SLAM method and device based on redundant key frame removal
Zhou et al. Place recognition and navigation of outdoor mobile robots based on random Forest learning with a 3D LiDAR
Ali et al. A life-long SLAM approach using adaptable local maps based on rasterized LIDAR images
Morell et al. 3d maps representation using gng
CN115952248A (en) Pose processing method, device, equipment, medium and product of terminal equipment
Le Barz et al. Absolute geo-localization thanks to Hidden Markov Model and exemplar-based metric learning
Roznovjak Feature reprojection as image alignment score in GNSS-free UAV localization
Ravindranath et al. 3D-3D Self-Calibration of Sensors Using Point Cloud Data
Mäkelä Deep Feature UAV Localization in Urban Areas and Agricultural Fields and Forests
Mukherjee Design of Real Time System for SLAM and Semantic Mapping
Ling et al. Benchmarking Classical and Learning-Based Multibeam Point Cloud Registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination