CN117451033A - Synchronous positioning and map construction method, device, terminal and medium - Google Patents


Info

Publication number
CN117451033A
CN117451033A (application CN202311764930.XA; granted as CN117451033B)
Authority
CN
China
Prior art keywords
point cloud
data
cloud data
laser radar
loop
Prior art date
Legal status
Granted
Application number
CN202311764930.XA
Other languages
Chinese (zh)
Other versions
CN117451033B (en)
Inventor
孙国玺
王智宇
Current Assignee
Guangdong University of Petrochemical Technology
Original Assignee
Guangdong University of Petrochemical Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Petrochemical Technology
Priority to CN202311764930.XA
Publication of CN117451033A
Application granted
Publication of CN117451033B
Legal status: Active


Classifications

    • G01C21/3841 Creation or updating of electronic map data from two or more sources, e.g. probe vehicles
    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/1652 Dead reckoning by integrating acceleration or speed (inertial navigation) combined with ranging devices, e.g. LIDAR or RADAR
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • Y02T10/40 Engine management systems (climate change mitigation tagging)

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a synchronous positioning and map construction method, a device, a terminal and a medium. The method comprises: obtaining IMU data and laser radar point cloud data of a robot, and pre-integrating the IMU data to obtain IMU odometry data; performing ground fitting on the laser radar point cloud data and extracting no feature points from the fitted ground, so as to obtain laser radar odometry data; improving the selection of matching loops in loop detection by adopting a strategy based on a global descriptor, so as to reduce false loops; optimizing the pose in the laser radar odometry data to obtain loop-detection point cloud matching data; and constructing a factor graph optimization model and performing graph optimization on the global pose of the robot to build a point cloud map, thereby obtaining an inspection map for the robot. The embodiment of the invention can therefore reduce the computational load of the algorithm while maintaining positioning accuracy.

Description

Synchronous positioning and map construction method, device, terminal and medium
Technical Field
The present invention relates to the field of synchronous positioning technologies, and in particular, to a method, an apparatus, a terminal, and a medium for synchronous positioning and map construction.
Background
Existing laser-radar-based synchronous positioning and map construction methods fall into two categories: filter-based methods and graph-optimization-based methods. Filter-based methods derive from Bayesian estimation theory and assume that the current state depends only on the state at the previous moment. Graph-optimization-based methods consider all poses and environment observations over the whole course of motion, representing the robot poses and their constraints as a graph of nodes and edges.
In feature matching based on graph optimization, feature points are extracted from the point cloud data after motion distortion correction, and the motion between two frames of point cloud data is obtained by matching. Traditional methods do not treat the ground specially when extracting feature points: feature extraction is performed on the ground point cloud data as well, even though the ground is simply a plane, so a large amount of computing resources is consumed during feature extraction and point cloud matching.
In addition, loop detection is an important part of a synchronous positioning and map construction method: recognizing a previously visited scene triggers a loop closure, which corrects accumulated error and yields a consistent map. To ensure the accuracy of loop detection, the prior art selects specific loop frames by thresholds set from empirical values. This is not robust to changing environments and easily leads to missed or repeated selections, wasting computing resources.
Disclosure of Invention
The invention provides a synchronous positioning and map construction method, a synchronous positioning and map construction device, a terminal and a medium, which can improve the accuracy of map construction and reduce the calculation resources of map construction.
In order to achieve the above object, in a first aspect, an embodiment of the present invention provides a synchronous positioning and map construction method, including:
acquiring IMU data and laser radar point cloud data of a robot; pre-integrating the IMU data to obtain the odometry data of the IMU;
performing motion distortion correction and point cloud feature extraction on the laser radar point cloud data to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, and performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar;
judging whether loop exists in the odometer data of the laser radar, if so, performing loop detection, executing point cloud matching, and optimizing the pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection;
according to IMU constraint, odometer data of the laser radar and point cloud matching data detected by the loop, constructing a factor graph optimization model, and performing graph optimization on the global pose of the robot; and constructing the point cloud data after the map optimization into a point cloud map to obtain the inspection map of the robot.
As an improvement of the scheme, the laser radar point cloud data is subjected to motion distortion correction and point cloud feature extraction to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar, wherein the method specifically comprises the following steps of:
calculating a rotation increment by adopting the IMU data, calculating a translation increment by adopting odometer data of the IMU, and performing motion compensation on the laser radar point cloud data to finish motion distortion correction;
performing point cloud feature extraction based on the point cloud data subjected to distortion correction to obtain feature point cloud data;
performing plane fitting on the characteristic point cloud data by adopting a RANSAC method, marking the characteristic point cloud data of a fitting plane, and not performing point cloud matching;
and carrying out point cloud matching on the characteristic point cloud data of the non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar.
As an improvement of the above scheme, the determining whether the loop exists in the odometer data of the laser radar, if yes, performing loop detection, performing point cloud matching, and optimizing the pose in the odometer data of the laser radar to obtain the point cloud matching data of the loop detection specifically includes:
when the robot is detected to move to a repeated position, acquiring a target key frame of current laser radar point cloud data, searching a distance key frame meeting a distance threshold value in a KDTree of a key frame sequence of the laser radar point cloud data, and extracting point cloud data of the distance key frame;
judging whether the distance key frame meets a time threshold value, if so, taking out target point cloud data of the target key frame and candidate point cloud data of five key frames near the candidate loop frame;
converting the target point cloud data and the candidate point cloud data into a target descriptor and a candidate descriptor, and calculating the Hamming distance between the target descriptor and the candidate descriptor;
if the Hamming distance accords with the Hamming distance threshold, loop-back detection is carried out on the odometer data of the laser radar, point cloud matching is executed, the pose in the odometer data of the laser radar is optimized, and point cloud matching data of loop-back detection are obtained.
As an improvement of the above solution, the converting the target point cloud data and the candidate point cloud data into a target descriptor and a candidate descriptor, and calculating a hamming distance between the target descriptor and the candidate descriptor specifically includes:
generating a discretized aerial view from the target point cloud data and the candidate point cloud data to obtain a target description sub-image and a candidate description sub-image;
performing Fourier transform on the target description sub-image and the candidate description sub-image to obtain the transformed images;
and extracting binary features of the transformed target description sub-image and the candidate description sub-image by using a Log-Gabor filter to obtain a target binary feature image and a candidate binary feature image, and calculating the Hamming distance between the target binary feature image and the candidate binary feature image.
As an improvement of the above solution, if the hamming distance meets a hamming distance threshold, loop-back detection is performed on the odometer data of the laser radar, and point cloud matching is performed, so as to optimize pose in the odometer data of the laser radar and obtain point cloud matching data of the loop-back detection, which specifically includes:
if the Hamming distance accords with the Hamming distance threshold, loop-back detection is carried out on the odometer data of the laser radar;
transforming the target point cloud data according to an iterative closest point method to obtain temporary transformed point cloud data; comparing the temporary transformation point cloud data with the point cloud data of the loop, and finding out the nearest neighbor point of each point cloud in the point cloud data of the loop;
storing the point cloud data of the nearest neighbor points in a KD-tree, calculating the centroids of the target point cloud data and the loop point cloud data, and converting both point clouds into the coordinate system of their centroids;
calculating the centroid matrix of the target point cloud data and the loop point cloud data and performing SVD decomposition on it to obtain the optimal rotation and optimal translation; the iteration ends when the maximum number of iterations is reached or the change in the optimal rotation and translation falls below a given threshold, yielding the optimal pose transformation;
and optimizing the pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection.
As an improvement of the above scheme, the expression of the centroid matrix is:
$$H = \sum_{i=1}^{n} (p_i - \bar{p})(q_i - \bar{q})^{T}$$

wherein $H$ is the centroid matrix; $p_i$ is the $i$-th point of the target point cloud data and $\bar{p}$ is the centroid of the target point cloud data; $q_i$ is the $i$-th point of the loop point cloud data and $\bar{q}$ is the centroid of the loop point cloud data; $T$ denotes the matrix transpose.
As an improvement of the scheme, the characteristic point cloud data of the non-fitting plane is subjected to point cloud matching through the initial value of the odometer data of the IMU to obtain the odometer data of the laser radar, and the method specifically comprises the following steps:
extracting features of a key frame based on feature point cloud data of a non-fitting plane, wherein the features of the key frame comprise corner features and surface point features;
when a new key frame arrives, constructing a local feature map by utilizing feature sets of all key frames before the new key frame and pose sets corresponding to the feature sets;
matching the features of the new key frame with the local feature map through the initial value of the odometer data of the IMU; establishing a point-line distance constraint and a point-plane distance constraint, and minimizing all distance constraints to optimize a new pose, so as to obtain odometer data of the laser radar;
the expression of the local feature map is as follows:
$$M_i = \{M_i^{e},\, M_i^{p}\}$$

where $M_i$ is the $i$-th local feature map, and $M_i^{e}$ and $M_i^{p}$ are respectively the $i$-th local edge feature map and the $i$-th local plane feature map in the world coordinate system.
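The point-line and point-plane distance constraints mentioned above can be sketched as the following residuals (a simplified illustration; the function names and symbols are assumptions, not the patent's notation):

```python
import numpy as np

def point_plane_residual(p, q, n):
    """Signed point-to-plane distance of a surface feature p against a
    plane through map point q with normal n -- the residual minimised
    over the pose in the scan-to-map step."""
    n = n / np.linalg.norm(n)
    return float(np.dot(p - q, n))

def point_line_residual(p, a, b):
    """Point-to-line distance of a corner feature p against the map line
    through points a and b."""
    d = b - a
    return float(np.linalg.norm(np.cross(p - a, d)) / np.linalg.norm(d))
```

Minimizing the sum of squares of such residuals over all matched features yields the optimized pose.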
In a second aspect, an embodiment of the present invention provides a synchronous positioning and map building apparatus, including:
the data acquisition module is used for acquiring the IMU data and the laser radar point cloud data of the robot; pre-integrating the IMU data to obtain the odometry data of the IMU;
the feature extraction module is used for carrying out motion distortion correction and point cloud feature extraction on the laser radar point cloud data to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, and performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar;
the loop detection module is used for judging whether loop exists in the odometer data of the laser radar, if yes, loop detection is carried out, point cloud matching is carried out, and the pose in the odometer data of the laser radar is optimized, so that point cloud matching data of loop detection is obtained;
the map construction module is used for constructing a factor graph optimization model according to IMU constraint, the odometer data of the laser radar and the point cloud matching data detected by the loop, and performing graph optimization on the global pose of the robot; and constructing the point cloud data after the map optimization into a point cloud map to obtain the inspection map of the robot.
In a third aspect, an embodiment of the present invention correspondingly provides a terminal device, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor implements the above-mentioned synchronous positioning and map construction method when executing the computer program.
In addition, the embodiment of the invention also provides a computer readable storage medium, which comprises a stored computer program, wherein the computer program is used for controlling equipment where the computer readable storage medium is located to execute the synchronous positioning and map construction method.
Compared with the prior art, the method, the device, the terminal and the medium for synchronously positioning and constructing the map are disclosed by the embodiment of the invention, and the IMU data and the laser radar point cloud data of the robot are obtained; pre-integrating the IMU data to obtain the odometry data of the IMU; performing motion distortion correction and point cloud feature extraction on the laser radar point cloud data to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, and performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar; judging whether loop exists in the odometer data of the laser radar, if so, performing loop detection, executing point cloud matching, and optimizing the pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection; according to IMU constraint, odometer data of the laser radar and point cloud matching data detected by the loop, constructing a factor graph optimization model, and performing graph optimization on the global pose of the robot; and constructing the point cloud data after the map optimization into a point cloud map to obtain the inspection map of the robot. Therefore, the embodiment of the invention can eliminate the ground point cloud data when acquiring the laser radar odometer data, thereby reducing the consumption of calculation resources; in loop candidate point selection of loop detection, the selected loop is required to meet a Hamming distance threshold besides meeting a time distance threshold, and the number of loop frames is reduced, so that the situations of loop error selection and repeated selection are reduced, the calculation resources required by operation are reduced, and a reference is provided for low-calculation-power platform deployment.
Drawings
FIG. 1 is a schematic flow chart of a method for synchronously positioning and constructing a map according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a synchronous positioning and map building device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a flowchart of a synchronous positioning and map construction method according to an embodiment of the present invention, where the synchronous positioning and map construction method includes steps S11 to S14:
s11: acquiring IMU data and laser radar point cloud data of a robot; pre-integrating the IMU data to obtain the odometry data of the IMU;
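As an illustrative sketch of the pre-integration in step S11, the following naive discrete integration accumulates rotation, velocity and position deltas between two keyframes. A real implementation would use on-manifold pre-integration with bias and noise terms; all names here are assumptions:

```python
import numpy as np

def preintegrate_imu(accels, gyros, dt, g=np.array([0.0, 0.0, -9.81])):
    """Naive discrete IMU pre-integration between two keyframes.

    accels, gyros: (N, 3) body-frame measurements; dt: sample period.
    Returns the accumulated rotation matrix and velocity/position deltas.
    """
    R = np.eye(3)        # accumulated rotation
    v = np.zeros(3)      # accumulated velocity delta
    p = np.zeros(3)      # accumulated position delta
    for a, w in zip(accels, gyros):
        wx, wy, wz = w * dt
        # first-order (small-angle) rotation update
        dR = np.array([[1.0, -wz,  wy],
                       [ wz, 1.0, -wx],
                       [-wy,  wx, 1.0]])
        p = p + v * dt + 0.5 * (R @ a + g) * dt ** 2
        v = v + (R @ a + g) * dt
        R = R @ dR
    return R, v, p
```

For a stationary sensor that measures only gravity, the deltas stay at zero, which is a quick sanity check on the sign conventions.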
s12: performing motion distortion correction and point cloud feature extraction on the laser radar point cloud data to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, and performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar;
s13: judging whether loop exists in the odometer data of the laser radar, if so, performing loop detection, executing point cloud matching, and optimizing the pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection;
s14: according to IMU constraint, odometer data of the laser radar and point cloud matching data detected by the loop, constructing a factor graph optimization model, and performing graph optimization on the global pose of the robot; and constructing the point cloud data after the map optimization into a point cloud map to obtain the inspection map of the robot.
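To illustrate the graph-optimization idea of step S14 in the simplest possible setting (scalar poses and linear constraints, not the patent's actual 6-DoF factor graph with IMU, odometry and loop factors), one can solve a tiny pose graph by least squares:

```python
import numpy as np

def optimize_pose_graph_1d(n, constraints, prior=0.0):
    """Tiny linear pose-graph solve: poses are scalars, and each
    constraint (i, j, d) means x_j - x_i should equal d. A prior anchors
    x_0. Real SLAM back ends do the same on SE(3) with iterative
    relinearization.
    """
    rows = [np.eye(1, n, 0).ravel()]   # prior row on x_0
    rhs = [prior]
    for i, j, d in constraints:
        row = np.zeros(n)
        row[i], row[j] = -1.0, 1.0
        rows.append(row)
        rhs.append(d)
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With two odometry constraints of 1.0 and a loop constraint of 1.9 over three poses, the solver spreads the 0.1 inconsistency evenly, which is exactly the effect graph optimization has on accumulated drift.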
It should be noted that, the inspection map may be used as a navigation map for the inspection robot to perform the inspection task.
Specifically, the step S12 specifically includes:
calculating a rotation increment by adopting the IMU data, calculating a translation increment by adopting odometer data of the IMU, and performing motion compensation on the laser radar point cloud data to finish motion distortion correction;
performing point cloud feature extraction based on the point cloud data subjected to distortion correction to obtain feature point cloud data;
performing plane fitting on the characteristic point cloud data by adopting a RANSAC method, marking the characteristic point cloud data of a fitting plane, and not performing point cloud matching;
and carrying out point cloud matching on the characteristic point cloud data of the non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar.
In the implementation, the rotation increment is calculated by using the IMU data at the start moment of the current frame, the translation increment is calculated by using the IMU odometry data, and the laser point cloud at each moment is transformed into the coordinate system of the first laser point, thereby motion-compensating the point cloud data. The motion-distortion-corrected point cloud data is then fitted to the ground by the RANSAC method: a better model is estimated iteratively to obtain the plane equation, and point cloud data conforming to the plane equation is marked and excluded from point cloud matching, saving computing resources.
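A minimal RANSAC ground-fit in the spirit of the paragraph above (the threshold, iteration count and all names are illustrative, not taken from the patent):

```python
import numpy as np

def ransac_plane(points, n_iters=100, dist_thresh=0.05, seed=0):
    """Fit a plane n.x + d = 0 to an (N, 3) cloud with RANSAC.

    Returns (normal, d, inlier_mask); inliers (ground points) can then be
    skipped during point-cloud matching, as described above.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers
```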
Specifically, the step S13 specifically includes:
when the robot is detected to move to a repeated position, acquiring a target key frame of current laser radar point cloud data, searching a distance key frame meeting a distance threshold value in a KDTree of a key frame sequence of the laser radar point cloud data, and extracting point cloud data of the distance key frame;
judging whether the distance key frame meets a time threshold value, if so, taking out target point cloud data of the target key frame and candidate point cloud data of five key frames near the candidate loop frame;
converting the target point cloud data and the candidate point cloud data into a target descriptor and a candidate descriptor, and calculating the Hamming distance between the target descriptor and the candidate descriptor;
if the Hamming distance accords with the Hamming distance threshold, loop-back detection is carried out on the odometer data of the laser radar, point cloud matching is executed, the pose in the odometer data of the laser radar is optimized, and point cloud matching data of loop-back detection are obtained.
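The distance- and time-threshold candidate selection above can be sketched as follows; the patent uses a KD-tree for the radius search, replaced here by a brute-force query for brevity, and the threshold values are illustrative:

```python
import numpy as np

def find_loop_candidates(positions, stamps, cur_idx,
                         dist_thresh=5.0, time_thresh=30.0):
    """Return indices of keyframes spatially close to the current
    keyframe but temporally far from it -- the two tests a candidate
    loop frame must pass before descriptor matching.
    """
    d = np.linalg.norm(positions[:cur_idx + 1] - positions[cur_idx], axis=1)
    return [i for i in np.nonzero(d < dist_thresh)[0]
            if stamps[cur_idx] - stamps[i] > time_thresh]
```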
More specifically, the converting the target point cloud data and the candidate point cloud data into a target descriptor and a candidate descriptor, and calculating a hamming distance between the target descriptor and the candidate descriptor specifically includes:
generating a discretized aerial view from the target point cloud data and the candidate point cloud data to obtain a target description sub-image and a candidate description sub-image;
performing Fourier transform on the target description sub-image and the candidate description sub-image to obtain the transformed images;
and extracting binary features of the transformed target description sub-image and the candidate description sub-image by using a Log-Gabor filter to obtain a target binary feature image and a candidate binary feature image, and calculating the Hamming distance between the target binary feature image and the candidate binary feature image.
After the target point cloud data of the target key frame and the candidate point cloud data of five key frames near the candidate loop frame, which meet the time threshold and the distance threshold, are obtained, a discretized aerial view is generated from the target point cloud data and the candidate point cloud data. Specifically, an m x m square area centered on the laser radar is taken as the effective sensing area, and within it the point cloud data is discretized into an 80 (radial) x 360 (angular) bin annular aerial view, giving the target description sub-image and the candidate description sub-image. A translation of the sensor appears as a shift between the generated description sub-images, so the embodiment of the invention uses the Fourier transform to estimate the translation between the target description sub-image and the candidate description sub-image. The Fourier transform can estimate rotation, scaling and translation, and the binned point cloud data preserves the absolute internal structure of the point cloud, so the horizontal and vertical shifts of the description sub-images, and the slight changes of pixel intensity caused by rotation and translation of the point cloud, can be handled after the Fourier transform.
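The polar discretisation described above can be sketched as follows; a plain 0/1 occupancy per bin is used here, whereas the original may encode other per-bin statistics, and all names are illustrative:

```python
import numpy as np

def polar_bev_descriptor(points, n_rings=80, n_sectors=360, max_range=80.0):
    """Discretise an (N, 3) cloud into an (n_rings x n_sectors) polar
    bird's-eye-view occupancy image around the sensor, matching the
    80 radial x 360 angular binning described in the text.
    """
    img = np.zeros((n_rings, n_sectors))
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.mod(np.arctan2(points[:, 1], points[:, 0]), 2 * np.pi)
    keep = r < max_range
    ring = (r[keep] / max_range * n_rings).astype(int)
    sector = (theta[keep] / (2 * np.pi) * n_sectors).astype(int)
    img[np.clip(ring, 0, n_rings - 1), np.clip(sector, 0, n_sectors - 1)] = 1.0
    return img
```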
Assume that the target description sub-image and the candidate description sub-image differ by only one displacement:

$$I_2(x, y) = I_1(x - \delta_x,\, y - \delta_y)$$

where $I_1$ and $I_2$ represent the two radar iris images differing by one displacement, and $\delta_x$, $\delta_y$ represent the amount of translation on the X and Y axes. The Fourier transforms of the target description sub-image and the candidate description sub-image are then related by:

$$\hat{I}_2(u, v) = \hat{I}_1(u, v)\, e^{-j 2 \pi (u \delta_x + v \delta_y)}$$

The corresponding normalized cross-power spectrum is defined as:

$$\hat{C}(u, v) = \frac{\hat{I}_1(u, v)\, \hat{I}_2^{*}(u, v)}{\bigl|\hat{I}_1(u, v)\, \hat{I}_2^{*}(u, v)\bigr|} = e^{j 2 \pi (u \delta_x + v \delta_y)}$$

where $*$ represents conjugation. Taking the inverse Fourier transform of $\hat{C}(u, v)$ gives an impulse that is non-zero only at $(-\delta_x, -\delta_y)$, so the relative translation can be read off from the location of its peak.
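A minimal sketch of this phase-correlation step for discrete images, with the conjugation placed so that the response peaks directly at the shift of the second image relative to the first (function and variable names are illustrative):

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the integer (row, col) shift of img_b relative to img_a.

    The normalised cross-power spectrum of the two images is a pure
    phase ramp; its inverse FFT is an impulse at the translation.
    """
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    response = np.abs(np.fft.ifft2(cross))
    return np.unravel_index(np.argmax(response), response.shape)
```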
Binary features of the target description sub-image and the candidate description sub-image are then extracted using Log-Gabor filters, which decompose the data in the image regions into components that appear at different resolutions. Unlike the conventional Fourier transform, the Log-Gabor filter allows local frequency variations, enabling feature matching at the same location and resolution. The frequency response of a one-dimensional Log-Gabor filter is:

$$G(f) = \exp\!\left(-\frac{\left(\log(f / f_0)\right)^{2}}{2 \left(\log(\sigma / f_0)\right)^{2}}\right)$$

where $G(f)$ is the frequency response, $f$ is the current frequency, $f_0$ is the center frequency of the filter, and $\sigma$ affects the bandwidth of the filter; the ratio $\sigma / f_0$ is kept constant so that all filters in the bank have the same shape.
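A direct transcription of this frequency response (the 0.55 bandwidth ratio is a commonly used illustrative value, not taken from the patent):

```python
import numpy as np

def log_gabor_response(freqs, f0, sigma_ratio=0.55):
    """One-dimensional Log-Gabor frequency response
    G(f) = exp(-(log(f/f0))^2 / (2 * (log(sigma/f0))^2)),
    with sigma/f0 held constant so every filter has the same shape.
    Defined for positive frequencies; zero elsewhere.
    """
    freqs = np.asarray(freqs, dtype=float)
    out = np.zeros_like(freqs)
    pos = freqs > 0
    out[pos] = np.exp(-np.log(freqs[pos] / f0) ** 2
                      / (2 * np.log(sigma_ratio) ** 2))
    return out
```

The response equals 1 at the center frequency and is symmetric on a logarithmic frequency axis, which the assertions below check.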
In order to achieve optimal closed-loop accuracy at a lower operating cost, the embodiment of the invention uses four Log-Gabor filters. The convolution responses of the four Log-Gabor filters are converted into binary values by a simple threshold and superimposed into a large binary feature map for each image. The Hamming distance between the resulting target binary feature map and candidate binary feature map is then calculated; if it does not exceed the Hamming distance threshold, the loop closure is considered successful, i.e. a loop exists in the odometer data of the laser radar.
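The final comparison of the two binary feature maps reduces to a normalised Hamming distance, sketched here with illustrative names:

```python
import numpy as np

def hamming_distance(feat_a, feat_b):
    """Normalised Hamming distance between two binary feature maps:
    the fraction of differing bits. A loop closure is accepted when
    this falls below a threshold, as described above.
    """
    feat_a = np.asarray(feat_a, dtype=bool)
    feat_b = np.asarray(feat_b, dtype=bool)
    return float(np.count_nonzero(feat_a ^ feat_b)) / feat_a.size
```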
More specifically, if the hamming distance meets a hamming distance threshold, loop-back exists in the odometer data of the laser radar, loop-back detection is performed, point cloud matching is performed, and the pose in the odometer data of the laser radar is optimized to obtain point cloud matching data of loop-back detection, which specifically includes:
if the Hamming distance accords with the Hamming distance threshold, loop-back detection is carried out on the odometer data of the laser radar;
transforming the target point cloud data according to an iterative closest point method to obtain temporary transformed point cloud data; comparing the temporary transformation point cloud data with the point cloud data of the loop, and finding out the nearest neighbor point of each point cloud in the point cloud data of the loop;
saving point cloud data of the nearest neighbor point by KD-tree, calculating the mass centers of the point cloud data of the target point and the point cloud data of the loop through SVD decomposition, and converting the point cloud data of the target point and the point cloud data of the loop into a coordinate system of the mass centers;
calculating the centroid matrix of the target point cloud data and the point cloud data of the loop and performing SVD decomposition on it to obtain the optimal rotation and optimal translation; the iteration ends when the maximum number of iterations is reached or the change in the optimal rotation and translation falls below a certain value, so as to obtain the optimal pose transformation;
and optimizing the pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection.
Specifically, the expression of the centroid matrix is:

$$H = \sum_{i=1}^{N} \left(p_i - \mu_p\right)\left(q_i - \mu_q\right)^{T}$$

where $H$ is the centroid matrix; $p_i$ is the $i$-th point of the target point cloud data and $\mu_p$ is the centroid of the target point cloud data; $q_i$ is the $i$-th point of the point cloud data of the loop and $\mu_q$ is the centroid of the point cloud data of the loop; $T$ denotes the matrix transpose.
It should be noted that, in the embodiment of the present invention, the iterative closest point method is used to perform point cloud matching on the target point cloud data and the point cloud data of the loop, so as to find the correspondence between points on the two point clouds and thereby obtain a more accurate transformation. The iterative closest point method decomposes point cloud matching into two sub-problems: searching for the closest points and solving for the optimal transformation. Searching for the closest points proceeds as follows: the initial point cloud data is transformed to obtain temporary transformed point cloud data, which is compared with the sample point cloud data to find, for each point in the initial point cloud data, its nearest neighbor in the sample point cloud data. A KD-tree is used to store the point cloud data of the nearest neighbors and to accelerate the search. Solving for the optimal transformation is done by SVD decomposition; specifically, after the centroids of the initial point cloud data and the sample point cloud data are calculated, both point clouds are converted into the centroid coordinate system, and the centroid matrix H of the initial point cloud data and the sample point cloud data is calculated, where H is defined as follows:
Let $\mu_p$, $\mu_q$ be the centroids of the initial point cloud data and the sample point cloud data respectively, and let $p_i' = p_i - \mu_p$, $q_i' = q_i - \mu_q$, so that $H = \sum_{i=1}^{N} p_i'\, q_i'^{T}$. Performing SVD decomposition on the $H$ matrix gives $H = U \Sigma V^{T}$, where $U$ is a $3 \times 3$ matrix, $\Sigma$ is a $3 \times 3$ matrix whose elements are 0 except on the main diagonal, each element on the main diagonal being called a singular value, and $V$ is a $3 \times 3$ matrix. $U$ and $V$ are both unitary matrices, i.e. they satisfy $U U^{T} = I$ and $V V^{T} = I$. The optimal rotation of the point cloud matching problem is $R = V U^{T}$, and the optimal translation is $t = \mu_q - R\,\mu_p$. From the obtained $H$ matrix, the optimal rotation and optimal translation are computed; the iteration ends when the maximum number of iterations is reached or the change in the optimal rotation and translation falls below a certain value, thereby obtaining a more accurate pose transformation.
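The SVD step above (the Kabsch solution used inside each ICP iteration) can be sketched as follows; the determinant check guarding against reflections is a standard addition not spelled out in the text:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Given matched point sets P, Q of shape (N, 3), return the rotation R
    and translation t minimizing ||R p_i + t - q_i|| in the least-squares
    sense, via SVD of the centroid (cross-covariance) matrix H."""
    mu_p = P.mean(axis=0)
    mu_q = Q.mean(axis=0)
    H = (P - mu_p).T @ (Q - mu_q)        # 3x3 centroid matrix H
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                       # optimal rotation R = V U^T
    if np.linalg.det(R) < 0:             # guard against an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_q - R @ mu_p                  # optimal translation t = mu_q - R mu_p
    return R, t
```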
More specifically, the characteristic point cloud data of the non-fitting plane is subjected to point cloud matching through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar, and the method specifically comprises the following steps:
extracting features of a key frame based on feature point cloud data of a non-fitting plane, wherein the features of the key frame comprise corner features and surface point features;
when a new key frame arrives, constructing a local feature map by utilizing feature sets of all key frames before the new key frame and pose sets corresponding to the feature sets;
matching the features of the new key frame with the local feature map through the initial value of the odometer data of the IMU; establishing a point-line distance constraint and a point-plane distance constraint, and minimizing all distance constraints to optimize a new pose, so as to obtain odometer data of the laser radar;
the expression of the local feature map is as follows:
$$M_i = \left\{ M_i^{e},\ M_i^{p} \right\}$$

where $M_i$ is the $i$-th local feature map, and $M_i^{e}$ and $M_i^{p}$ are respectively the $i$-th local edge feature map and the $i$-th local plane feature map in the world coordinate system.
In the process of obtaining the odometer data of the laser radar, only the features of the key frames are used, and all other frames are discarded. The features extracted from the $i$-th key frame are $F_i = \{F_i^{e}, F_i^{p}\}$, where $F_i^{e}$ is the corner feature of the feature extraction, namely the edge feature, and $F_i^{p}$ is the surface feature of the feature extraction, namely the planar feature. When a new key frame $F_{n+1}$ arrives, the feature sets $\{F_0, \ldots, F_n\}$ of the previous $n+1$ key frames and the corresponding pose set $\{T_0, \ldots, T_n\}$ are used to construct the local feature map $M_n = \{M_n^{e}, M_n^{p}\}$, where

$$M_n^{e} = {}'F_0^{e} \cup {}'F_1^{e} \cup \cdots \cup {}'F_n^{e}, \qquad M_n^{p} = {}'F_0^{p} \cup {}'F_1^{p} \cup \cdots \cup {}'F_n^{p}$$

where ${}'F_i^{e}$ and ${}'F_i^{p}$ are the corner features and surface features of the $i$-th key frame transformed into the world coordinate system, and $M_n^{e}$ and $M_n^{p}$ are the local edge feature map and the local plane feature map in the world coordinate system. Afterwards, $F_{n+1}^{e}$ is matched with $M_n^{e}$ and $F_{n+1}^{p}$ with $M_n^{p}$, point-line distance constraints and point-plane distance constraints are established, and the new pose $T_{n+1}$ is optimized by minimizing all distance constraints. What is matched with the new key frame here is not a global point cloud but the combined point cloud of the past $n+1$ key frames.
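Assembling the local feature map from past keyframe features can be sketched as follows, assuming each pose is given as a rotation-translation pair (R, t) and each feature set as an (N, 3) array of points; names are illustrative:

```python
import numpy as np

def build_local_feature_map(features, poses):
    """Union of the past keyframes' feature points, each transformed into the
    world frame by its pose (R, t), forming the local feature map that a new
    keyframe is matched against (run once per feature type: edge, plane)."""
    world_points = []
    for pts, (R, t) in zip(features, poses):
        world_points.append(pts @ R.T + t)   # transform keyframe points to world frame
    return np.vstack(world_points)           # combined point cloud of n+1 keyframes
```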
Fig. 2 is a schematic structural diagram of a synchronous positioning and mapping device according to an embodiment of the present invention, where the synchronous positioning and mapping device includes:
the data acquisition module 21 is used for acquiring IMU data and laser radar point cloud data of the robot; pre-integrating the IMU data to obtain the odometry data of the IMU;
the feature extraction module 22 is configured to perform motion distortion correction and point cloud feature extraction on the laser radar point cloud data to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, and performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar;
the loop detection module 23 is configured to determine whether loop exists in the odometer data of the laser radar, if yes, perform loop detection, perform point cloud matching, and optimize pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection;
the map construction module 24 is configured to construct a factor graph optimization model according to IMU constraints, odometry data of the laser radar and point cloud matching data detected by the loop, and perform graph optimization on a global pose of the robot; and constructing the point cloud data after the map optimization into a point cloud map to obtain the inspection map of the robot.
The synchronous positioning and map construction device provided by the embodiment of the invention can realize all the processes of the synchronous positioning and map construction method of the above embodiment; the functions and technical effects realized by each module of the device correspond to those of the synchronous positioning and map construction method of the above embodiment, and are not repeated here.
The embodiment of the invention correspondingly provides a terminal device, which comprises: a processor, a memory, and a computer program stored in the memory and executable on the processor. The steps in the above embodiments of the synchronous positioning and map construction method are implemented when the processor executes the computer program. Or the processor executes the computer program to realize the functions of the modules in the embodiment of the synchronous positioning and mapping device.
The terminal device can be a computing device such as a desktop computer, a notebook computer, a palm computer, or a cloud server. The terminal device may include, but is not limited to, a processor and a memory. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of a terminal device and does not constitute a limitation of the terminal device, which may include more or fewer components than illustrated, combine certain components, or have different components; for example, the terminal device may further include an input-output device, a network access device, a bus, etc.
The processor may be a central processing unit, or another general-purpose processor, digital signal processor, application-specific integrated circuit, field-programmable gate array or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, etc. The general-purpose processor may be a microprocessor, or any conventional processor; the processor is the control center of the terminal device and connects the various parts of the entire terminal device using various interfaces and lines.
The memory may be used to store the computer program and/or module, and the processor implements the various functions of the terminal device by running or executing the computer program and/or module stored in the memory and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and application programs required for at least one function (such as a sound playing function, an image playing function, etc.); the storage data area may store data created according to the use of the device (such as audio data, a phone book, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a smart memory card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
It should be noted that the above-described apparatus embodiments are merely illustrative, and the units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The embodiment of the invention also provides a computer readable storage medium, which comprises a stored computer program, wherein the computer program is used for controlling equipment where the computer readable storage medium is located to execute the synchronous positioning and map construction method according to the embodiment.
In summary, the method, the device, the terminal and the medium for synchronously positioning and constructing the map disclosed by the embodiment of the invention acquire the IMU data and the laser radar point cloud data of the robot; pre-integrating the IMU data to obtain the odometry data of the IMU; performing motion distortion correction and point cloud feature extraction on the laser radar point cloud data to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, and performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar; judging whether loop exists in the odometer data of the laser radar, if so, performing loop detection, executing point cloud matching, and optimizing the pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection; according to IMU constraint, odometer data of the laser radar and point cloud matching data detected by the loop, constructing a factor graph optimization model, and performing graph optimization on the global pose of the robot; and constructing the point cloud data after the map optimization into a point cloud map to obtain the inspection map of the robot. Therefore, the embodiment of the invention can eliminate the ground point cloud data when acquiring the laser radar odometer data, thereby reducing the consumption of calculation resources; in loop candidate point selection of loop detection, the selected loop is required to meet a Hamming distance threshold besides meeting a time distance threshold, and the number of loop frames is reduced, so that the situations of loop error selection and repeated selection are reduced, the calculation resources required by operation are reduced, and a reference is provided for low-calculation-power platform deployment.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, such changes and modifications are also intended to be within the scope of the invention.

Claims (10)

1. The synchronous positioning and map construction method is characterized by comprising the following steps:
acquiring IMU data and laser radar point cloud data of a robot; pre-integrating the IMU data to obtain the odometry data of the IMU;
performing motion distortion correction and point cloud feature extraction on the laser radar point cloud data to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, and performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar;
judging whether loop exists in the odometer data of the laser radar, if so, performing loop detection, executing point cloud matching, and optimizing the pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection;
according to IMU constraint, odometer data of the laser radar and point cloud matching data detected by the loop, constructing a factor graph optimization model, and performing graph optimization on the global pose of the robot; and constructing the point cloud data after the map optimization into a point cloud map to obtain the inspection map of the robot.
2. The method for synchronously positioning and constructing a map according to claim 1, wherein the laser radar point cloud data is subjected to motion distortion correction and point cloud feature extraction to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar, wherein the method specifically comprises the following steps of:
calculating a rotation increment by adopting the IMU data, calculating a translation increment by adopting odometer data of the IMU, and performing motion compensation on the laser radar point cloud data to finish motion distortion correction;
performing point cloud feature extraction based on the point cloud data subjected to distortion correction to obtain feature point cloud data;
performing plane fitting on the characteristic point cloud data by adopting a RANSAC method, marking the characteristic point cloud data of a fitting plane, and not performing point cloud matching;
and carrying out point cloud matching on the characteristic point cloud data of the non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar.
3. The method for synchronously positioning and mapping according to claim 1, wherein the determining whether loop exists in the odometer data of the laser radar, if yes, performing loop detection, performing point cloud matching, and optimizing pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection, specifically comprises:
when the robot is detected to move to a repeated position, acquiring a target key frame of current laser radar point cloud data, searching a distance key frame meeting a distance threshold value in a KDTree of a key frame sequence of the laser radar point cloud data, and extracting point cloud data of the distance key frame;
judging whether the distance key frame meets a time threshold value, if so, taking out target point cloud data of the target key frame and candidate point cloud data of five key frames near the candidate loop frame;
converting the target point cloud data and the candidate point cloud data into a target descriptor and a candidate descriptor, and calculating the Hamming distance between the target descriptor and the candidate descriptor;
if the Hamming distance accords with the Hamming distance threshold, loop-back detection is carried out on the odometer data of the laser radar, point cloud matching is executed, the pose in the odometer data of the laser radar is optimized, and point cloud matching data of loop-back detection are obtained.
4. The method for synchronously positioning and mapping according to claim 3, wherein the converting the target point cloud data and the candidate point cloud data into target descriptors and candidate descriptors, and calculating a hamming distance between the target descriptors and the candidate descriptors specifically comprises:
generating a discretized aerial view from the target point cloud data and the candidate point cloud data to obtain a target description sub-image and a candidate description sub-image;
performing Fourier transformation on the target description sub-image and the candidate description sub-image to obtain the transformed target description sub-image and the transformed candidate description sub-image;
and extracting binary features of the transformed target description sub-image and the candidate description sub-image by using a Log-Gabor filter to obtain a target binary feature image and a candidate binary feature image, and calculating the Hamming distance between the target binary feature image and the candidate binary feature image.
5. The method for synchronously positioning and mapping according to claim 3, wherein if the hamming distance meets a hamming distance threshold, loop-back detection is performed on the odometer data of the laser radar, point cloud matching is performed, and a pose in the odometer data of the laser radar is optimized to obtain point cloud matching data of loop-back detection, and the method specifically comprises:
if the Hamming distance accords with the Hamming distance threshold, loop-back detection is carried out on the odometer data of the laser radar;
transforming the target point cloud data according to an iterative closest point method to obtain temporary transformed point cloud data; comparing the temporary transformation point cloud data with the point cloud data of the loop, and finding out the nearest neighbor point of each point cloud in the point cloud data of the loop;
saving point cloud data of the nearest neighbor point by KD-tree, calculating the mass centers of the point cloud data of the target point and the point cloud data of the loop through SVD decomposition, and converting the point cloud data of the target point and the point cloud data of the loop into a coordinate system of the mass centers;
calculating the centroid matrix of the target point cloud data and the point cloud data of the loop and performing SVD decomposition on it to obtain the optimal rotation and optimal translation; the iteration ends when the maximum number of iterations is reached or the change in the optimal rotation and translation falls below a certain value, so as to obtain the optimal pose transformation;
and optimizing the pose in the odometer data of the laser radar to obtain point cloud matching data of loop detection.
6. The synchronized locating and mapping method of claim 5, wherein said centroid matrix is expressed as:
$$H = \sum_{i=1}^{N} \left(p_i - \mu_p\right)\left(q_i - \mu_q\right)^{T}$$

wherein $H$ is the centroid matrix; $p_i$ is the $i$-th point of the target point cloud data and $\mu_p$ is the centroid of the target point cloud data; $q_i$ is the $i$-th point of the point cloud data of the loop and $\mu_q$ is the centroid of the point cloud data of the loop; $T$ denotes the matrix transpose.
7. The method for synchronously positioning and constructing a map according to claim 2, wherein the characteristic point cloud data of the non-fitting plane is subjected to point cloud matching through an initial value of odometer data of the IMU to obtain the odometer data of the laser radar, specifically comprising:
extracting features of a key frame based on feature point cloud data of a non-fitting plane, wherein the features of the key frame comprise corner features and surface point features;
when a new key frame arrives, constructing a local feature map by utilizing feature sets of all key frames before the new key frame and pose sets corresponding to the feature sets;
matching the features of the new key frame with the local feature map through the initial value of the odometer data of the IMU; establishing a point-line distance constraint and a point-plane distance constraint, and minimizing all distance constraints to optimize a new pose, so as to obtain odometer data of the laser radar;
the expression of the local feature map is as follows:
$$M_i = \left\{ M_i^{e},\ M_i^{p} \right\}$$

wherein $M_i$ is the $i$-th local feature map, and $M_i^{e}$ and $M_i^{p}$ are respectively the $i$-th local edge feature map and the $i$-th local plane feature map in the world coordinate system.
8. A synchronous positioning and map construction apparatus, comprising:
the data acquisition module is used for acquiring the IMU data and the laser radar point cloud data of the robot; pre-integrating the IMU data to obtain the odometry data of the IMU;
the feature extraction module is used for carrying out motion distortion correction and point cloud feature extraction on the laser radar point cloud data to obtain feature point cloud data; performing plane fitting on the characteristic point cloud data, and performing point cloud matching on the characteristic point cloud data of a non-fitting plane through an initial value of the odometer data of the IMU to obtain the odometer data of the laser radar;
the loop detection module is used for judging whether loop exists in the odometer data of the laser radar, if yes, loop detection is carried out, point cloud matching is carried out, and the pose in the odometer data of the laser radar is optimized, so that point cloud matching data of loop detection is obtained;
the map construction module is used for constructing a factor graph optimization model according to IMU constraint, the odometer data of the laser radar and the point cloud matching data detected by the loop, and performing graph optimization on the global pose of the robot; and constructing the point cloud data after the map optimization into a point cloud map to obtain the inspection map of the robot.
9. A terminal device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the synchronized positioning and mapping method according to any one of claims 1-7 when the computer program is executed.
10. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored computer program, wherein the computer program, when run, controls a device in which the computer readable storage medium is located to perform the synchronized locating and mapping method according to any one of claims 1-7.
CN202311764930.XA 2023-12-21 2023-12-21 Synchronous positioning and map construction method, device, terminal and medium Active CN117451033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311764930.XA CN117451033B (en) 2023-12-21 2023-12-21 Synchronous positioning and map construction method, device, terminal and medium


Publications (2)

Publication Number Publication Date
CN117451033A true CN117451033A (en) 2024-01-26
CN117451033B CN117451033B (en) 2024-05-14

Family

ID=89583972


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106525047A (en) * 2016-10-28 2017-03-22 重庆交通大学 Unmanned aerial vehicle path planning method based on floyd algorithm
CN111207774A (en) * 2020-01-17 2020-05-29 山东大学 Method and system for laser-IMU external reference calibration
KR20210112106A (en) * 2020-03-04 2021-09-14 한국전자통신연구원 Method and apparatus for autonomous driving of mobile robot in orchard environment
CN114972668A (en) * 2022-05-30 2022-08-30 哈尔滨工业大学(深圳) Laser SLAM method and system based on height information
CN115371662A (en) * 2022-08-22 2022-11-22 北京化工大学 Static map construction method for removing dynamic objects based on probability grid
CN115930977A (en) * 2022-12-08 2023-04-07 北京踏歌智行科技有限公司 Method and system for positioning characteristic degradation scene, electronic equipment and readable storage medium
CN116381717A (en) * 2023-04-25 2023-07-04 广东石油化工学院 Unmanned aerial vehicle positioning device and positioning method based on Kalman filtering algorithm
CN116429116A (en) * 2023-04-20 2023-07-14 山东新一代信息产业技术研究院有限公司 Robot positioning method and equipment
CN219676293U (en) * 2023-04-25 2023-09-12 广东石油化工学院 Unmanned aerial vehicle positioner based on Kalman filtering algorithm
CN117218350A (en) * 2023-09-19 2023-12-12 中南林业科技大学 SLAM implementation method and system based on solid-state radar




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant