CN117291973B - Quick robust initial positioning method based on laser point cloud - Google Patents

Quick robust initial positioning method based on laser point cloud

Info

Publication number
CN117291973B
CN117291973B (application CN202311583281.3A)
Authority
CN
China
Prior art keywords
point cloud
point
points
matrix
weight
Prior art date
Legal status
Active
Application number
CN202311583281.3A
Other languages
Chinese (zh)
Other versions
CN117291973A (en)
Inventor
陈诗洋
陈晓锋
王金龙
李大川
张元方
Current Assignee
City Lights Shenzhen Driverless Co ltd
Original Assignee
City Lights Shenzhen Driverless Co ltd
Priority date
Filing date
Publication date
Application filed by City Lights Shenzhen Driverless Co ltd filed Critical City Lights Shenzhen Driverless Co ltd
Priority to CN202311583281.3A
Publication of CN117291973A
Application granted
Publication of CN117291973B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/136 — Segmentation; Edge detection involving thresholding
    • G06T 7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 — Image registration using feature-based methods
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10028 — Range image; Depth image; 3D point clouds


Abstract

The invention discloses a fast and robust initial positioning method based on laser point clouds, used to obtain a global initial pose in an automatic-driving scene without any GNSS (Global Navigation Satellite System) input, relying only on a lidar scan of the surrounding environment. The method runs on a multi-line lidar and a domain controller to solve the pose. It comprises point cloud preprocessing and point cloud registration. The preprocessing binds the historical keyframes with their poses and removes ground points and noise points to obtain point clouds with low initial noise. The registration first performs coarse matching of the point clouds to obtain paired blocks, builds a graph matrix and from it a weight matrix, predicts an initial pose with a weighted solving method, iteratively refines the initial pose and score, and finally selects the keyframe index with the highest score and outputs the pose.

Description

Quick robust initial positioning method based on laser point cloud
Technical Field
The invention relates to the technical field of repositioning, in particular to a quick and robust initial positioning method based on laser point cloud.
Background
Relocalization mainly solves the problem of recovering the current pose and resuming state estimation after the robot's state estimate is lost. In the visual domain, monocular VSLAM algorithms mostly use BoW-based matching (ORB-SLAM, VINS, etc.), machine-learning patch-based matching (PTAM), and some brute-force matching schemes that use a separate thread to match against all keyframes, such as the LSD algorithm. Others only perturb the initial value and keep matching against the last keyframe, such as the DSO algorithm. The mainstream approach resembles the relocalization idea of ORB-SLAM: search a keyframe database to quickly retrieve candidate relocalization frames similar to the current frame, which is precisely where image features have an advantage.
For lidar, apart from exhaustive traversal-and-compare schemes, most common methods complete frame-to-frame matching with ICP and NDT, and common loop-closure descriptors such as Scan Context and LiDAR Iris can be adapted for relocalization. There are also deep-learning approaches such as SegMatch, SegMap, LOL and OverlapNet.
Compared with fast visual matching methods such as the bag-of-words model, lidar also has compressed feature descriptors such as Scan Context, but in practice they still fall short of our needs. Their precision is limited: only the maximum value along the z direction is extracted to form a two-dimensional matrix for rolling matching, so a large amount of the point cloud's geometric information in the z direction is lost; the approach transfers poorly across lidars with different beam counts; and matching fails when the historical keyframe point cloud differs from the current input frame by 180 degrees. Descriptor methods generally improve features in two ways: local features and global features. Local feature methods mainly extract and match feature points; because the features are local, applying them directly to the whole map for place recognition gives a low matching success rate. Global feature methods analyze characteristics of the point cloud and abstract the entire input cloud, for example into a histogram (the idea behind Scan Context and SegMatch), and then match the histograms. Their drawback is poor consistency: a slight change in robot position may already change the global feature of the input cloud. The LiDAR Iris work improves on this to some extent, but still requires local features to achieve a fine match.
Thus, the prior art is still to be improved.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a fast, accurate and robust initial positioning method with a small memory footprint, so that the carrying platform can achieve fast initial positioning in complex environments and can conveniently perform subsequent operations such as relocalization and loop closure after the pose is lost.
The technical scheme of the invention is as follows:
a fast robust initial positioning method based on laser point cloud, the method comprising point cloud preprocessing and point cloud registration;
the point cloud preprocessing comprises the following steps:
reading historical key frame point cloud information and frame pose information and binding;
reading a current point cloud frame and a historical key frame to remove ground points and noise points for filtering;
the point cloud registration includes:
performing rough matching on the point cloud to obtain a pairing block;
calculating the relation between the characteristic points and the characteristic points of the next frame through the pairing blocks, and constructing a graph matrix and an information matrix;
constructing a weight matrix, selecting the top K paired points with the largest weights as guidance, and predicting an initial pose and score with a weighted solving method;
and iteratively optimizing the initial pose and the score, selecting the key frame index with the maximum score, and outputting the pose.
In one embodiment, the removing the ground points specifically includes:
receiving the point cloud to be preprocessed, sorting the input points by Z value from small to large, and removing points below a preset height threshold;
inferring the ground height from the lidar mounting height and the height of the mobile carrier, extracting seed points for the ground, estimating the normal vector and the height-distance threshold of the seed-point plane with a plane estimation method, and iteratively removing non-ground points;
and computing the projection of each seed point onto the ground normal, keeping it as a ground point if the projection is smaller than the ground threshold, repeating the iteration, and outputting the point cloud with ground points removed after the last iteration.
In one embodiment, before the rough matching of the point cloud, the method further includes:
reading point clouds of a current frame and a historical key frame, performing rough registration in pairs, searching out two nearest points by using a Kd tree, calculating the distance between the two points, summing all the distances, dividing the sum by the number of points in the point clouds to obtain an approximate point cloud resolution value of each frame, and solving the average resolution of the two frames of point clouds; and downsampling the point clouds by using voxel filtering, reducing the number density of the point clouds, and extracting feature points.
In one embodiment, the extracting the feature point specifically includes:
the feature descriptors for respectively calculating the fast point feature histograms of the two-frame point cloud specifically comprise:
a simplified point feature histogram: the set of corresponding tuples between each point to be computed and its neighbors is taken as the simplified point feature histogram;
a fast point feature histogram: for each point, its k nearest neighbors are re-determined, and the final histogram of the point is obtained by weighting with the simplified point feature histograms of the neighboring points, giving the feature descriptor of the fast point feature histogram.
In one embodiment, after the method of constructing the graph matrix and the information matrix, the method further comprises:
initializing the graph matrix and library information; if the cluster threshold is greater than 3 and the number of correspondences is greater than 100, reducing the scale of the graph; otherwise, directly traversing the rows and columns of the graph matrix and copying the non-zero elements into a graph g;
then calling the graph library from C++ and finding all maximal cliques in the graph g with a maximal-clique enumeration method.
In one embodiment, after finding all maximal cliques in the graph g, the method further comprises:
using the weight matrix information, selecting with the nodes as guidance, simplifying the maximal cliques, and leaving only a few cliques for subsequent registration.
In one embodiment, the specific method of simplifying the maximal cliques comprises:
finding, for each node, the maximal clique with the largest weight according to the sum of the edge weights inside each maximal clique, the clique weight being obtained by adding the weights of all node pairs in the clique, and finally keeping the K maximal cliques with the highest scores by selecting and updating the maximal-clique information of the nodes.
In one embodiment, the constructing the weight matrix specifically includes:
judging by the weight: if a weight is not greater than the threshold, it is set to 0 and discarded; a weighted least-squares method based on singular value decomposition from pcl is used to estimate the rigid transformation matrix between the point clouds;
performing one iterative solve: if the weights meet the requirement, constructing a weight matrix whose size equals the number of points in the point cloud and filling it from the weight vector;
creating an identity matrix of the same size as the weight matrix, and multiplying the weight matrix element-wise by this identity matrix to obtain the final weight matrix.
In one embodiment, the iteratively optimizing the initial pose and score specifically includes:
Traversing each point in the source point cloud, calculating the distance between the point and the corresponding point in the target point cloud, and calculating a score according to the distance and the weight, wherein the iteration times are given;
if the distance between the matched points is smaller than the threshold value of the internal point, storing the matched points, and calculating weights and scores;
if the current score is lower than the previous optimal score, ending iteration, otherwise, extracting points corresponding to the inner point index from the source point cloud pairing and the target point cloud, constructing a predicted source point cloud and a predicted target point cloud, obtaining corresponding weight assignment, and optimizing the predicted source point cloud and the predicted target point cloud by using a weighted solving method to obtain an updated transformation matrix;
repeating the steps until the specified iteration times are reached or the score is not increased.
In one embodiment, the selecting the key frame index with the largest score, and outputting the pose specifically includes:
and obtaining registration scores of two frames, finding out the maximum score and index, outputting corresponding key frame pose information, converting a historical key frame into a current key frame coordinate system, and outputting the current pose of the mobile carrier.
The beneficial effects are as follows: compared with the prior art, the invention obtains a global initial position in an automatic-driving scene without GNSS input, using only a lidar scan of the surrounding environment; the pose is solved with the multi-line lidar input and a domain controller. By removing the ground and reducing the maximal-clique constraints beforehand, the pose of the mobile carrier in the map can be found quickly and accurately in complex environments, which facilitates subsequent pose estimation. Multithreading further improves the speed, so that the mobile carrier achieves fast initial positioning in complex scenes.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a flowchart of specific steps of a fast robust initial positioning method based on laser point cloud according to an embodiment of the present invention;
FIG. 2 is a flowchart of a fast robust initial positioning method based on laser point cloud for removing ground points according to an embodiment of the present invention;
FIG. 3 is a flow chart of coarse registration of a laser point cloud-based fast robust initial positioning method provided by an embodiment of the present invention;
fig. 4 is a flowchart of a fast robust initial positioning method based on laser point cloud according to an embodiment of the present invention, in which FPFH is used to extract descriptors;
FIG. 5 is a flowchart of a weight matrix diagram of a fast robust initial positioning method based on laser point cloud according to an embodiment of the present invention;
FIG. 6 is a flowchart of calculating an optimal threshold value according to a fast robust initial positioning method based on laser point cloud according to an embodiment of the present invention;
fig. 7 is a flowchart of searching for all maximal cliques from the weight matrix in the laser point cloud-based fast robust initial positioning method provided by an embodiment of the present invention;
FIG. 8 is a flowchart of selecting guide cliques from all maximal cliques in the laser point cloud-based fast robust initial positioning method provided by an embodiment of the present invention;
FIG. 9 is a flowchart of solving the predicted pose by weighted solving over the guide cliques, with iteration, in the laser point cloud-based fast robust initial positioning method provided by an embodiment of the present invention;
fig. 10 is a registration effect diagram of the laser point cloud-based fast robust initial positioning method provided by an embodiment of the present invention for two point cloud frames rotated 90 degrees about the z-axis;
fig. 11 is the registration effect of the ndt algorithm for two point cloud frames rotated 90 degrees about the z-axis, for comparison with the method provided by an embodiment of the present invention;
fig. 12 is the registration effect of the pcl (icp) algorithm for two point cloud frames rotated 90 degrees about the z-axis, for comparison with the method provided by an embodiment of the present invention;
fig. 13 is a registration effect diagram of the laser point cloud-based fast robust initial positioning method provided by an embodiment of the present invention for two point cloud frames rotated 180 degrees about the z-axis.
Detailed Description
The present invention will be described in further detail below in order to make the objects, technical solutions and effects of the present invention more clear and distinct. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. Embodiments of the present invention are described below with reference to the accompanying drawings.
The laser point cloud-based fast and robust initial positioning method provided by this embodiment is intended for a movable carrier, such as an unmanned vehicle, a robot, or other vehicles, robotic devices and scenarios that need to realize an automatic driving function. In a complex environment it quickly and accurately finds the pose of the mobile carrier in the map, facilitating subsequent pose estimation. However, since inter-frame matching algorithms are generally time-consuming, if a large number of keyframes must be examined to realize initial positioning, most algorithms run slowly and occupy a lot of memory.
To realize a fast and accurate initial positioning method with a small memory footprint, so that the mobile carrier achieves fast initial positioning in complex environments and can conveniently perform subsequent operations such as relocalization and loop closure after losing its pose, the invention provides an algorithm that quickly realizes initial positioning. Its key ideas are the ground-removal algorithm and the prior reduction of the maximal-clique constraints. Referring to figs. 1-13, the method specifically includes the following steps:
the method comprises the steps of point cloud preprocessing and point cloud registration;
the point cloud preprocessing comprises the following steps:
s1: reading historical key frame point cloud information and frame pose information and binding;
In the embodiment of the present invention, the specific process of step S1 is:
reading the historical keyframe container, acquiring the historical keyframes and the corresponding poses stored in it, binding them with their corresponding indices, and performing the ground and noise removal operation to remove ground points and noise;
s2: reading a current point cloud frame and a historical key frame to remove ground points and noise points for filtering;
referring to fig. 2, in the embodiment of the present invention, the specific process of removing the ground points in step S2 is as follows:
s2-1: first, the point cloud to be preprocessed is received and the input points are sorted by Z value from small to large, which speeds up the subsequent traversal. Points below a preset height threshold (min_h) are removed, discarding values that are obviously not ground points;
s2-2: the approximate ground height (th_seeds_heights) is roughly inferred from the lidar mounting height and the height of the mobile carrier, and the seed points of the ground are extracted: a threshold is set from the mean height of the 10 lowest points, and points below this threshold are kept as seed points;
s2-3: the normal vector of the plane of the seed point cloud group_pc and the height-distance threshold th_dist_d_ are then estimated with the estimate_plane method, and non-ground points are removed iteratively;
s2-4: the projection of each point onto the ground normal is computed; if the projection is smaller than the ground threshold, the point is stored as a ground point. Steps S2-3 and S2-4 are repeated, and the point cloud with ground points removed is output after the last iteration; optionally, the ground points can further be reduced to a two-dimensional ground representation.
Further, the estimate_plane method is used to estimate the normal vector and the distance threshold of the plane of the group_pc seed points. It mainly comprises the following steps:
first, the covariance matrix and mean of the seed point cloud are computed; singular value decomposition of the covariance matrix then yields the eigenvector matrix (equivalent to principal component analysis, PCA), and the third column of the eigenvector matrix is taken as the normal vector of the ground plane. Finally, the projection of the seed cloud onto the normal vector is computed from the normal vector and the mean of the seed cloud, and the height-distance threshold th_dist_d_ is obtained.
In a further embodiment, the point cloud registration comprises:
s3: performing rough matching on the point cloud to obtain a pairing block;
specifically, before the rough matching is performed on the point cloud, the method further includes:
s3-1: the point clouds of the current frame and of a historical keyframe are read and the point cloud resolution is calculated. Coarse registration is performed pairwise: a Kd tree is used to search nearest-neighbor point pairs; for each point, its two nearest points are searched with the Kd tree and their distance is calculated, all distances are summed and divided by the number of points in the cloud to obtain an approximate resolution for each frame, and the average resolution of the two frames is computed. The point clouds are then downsampled with voxel filtering; in this embodiment of the invention the voxel filter size is set to 0.7 m;
Further, the method for extracting the feature points specifically includes:
respectively calculating FPFH feature descriptors of two frames of point clouds, and respectively storing the FPFH feature descriptors into src_feature and des_feature variables; the details of calculating the FPFH are as follows:
First step (SPFH): the set of corresponding tuples (α, φ, θ) between each point to be calculated Pq and its neighbors is called the Simplified Point Feature Histogram (SPFH). The direct difference from the PFH is that the number of parameter tuples within the neighborhood is reduced: only the tuples between the point to be calculated and its neighbors are computed.
Second step (FPFH): using Equation 1, the k neighbors of each point are re-determined and the final histogram of Pq is weighted with the SPFHs of the neighboring points: FPFH(Pq) = SPFH(Pq) + (1/k) · Σ_{i=1..k} (1/ω_i) · SPFH(P_i), where ω_i is the distance between Pq and its neighbor P_i.
Third step: coarse registration is performed on the downsampled point clouds and their feature descriptors to obtain paired blocks, and the generated paired blocks are placed into the original correspondence set (a sketch of the FPFH computation follows this step list).
Still further, referring to fig. 3, the method for performing coarse registration in the step S3-1 includes:
s3-2-1: first, two SHOT feature descriptor point cloud objects are created, used to store the feature descriptors of the source point cloud and of the target point cloud respectively. The SHOT descriptor clouds are resized to match the number of feature vectors of the source and target clouds; the feature vectors of the source cloud are then traversed and each element value is copied into the corresponding position of the SHOT descriptor cloud; if the length of a feature vector is less than 352 (the length of the SHOT descriptor), the remaining positions are filled with 0. The feature vectors of the target cloud are traversed and the same operation is performed;
S3-2-2: then creating a Kd tree object, taking a SHOT feature descriptor of the target point cloud as input of the Kd tree object, traversing the SHOT feature descriptor of the source point cloud, and carrying out nearest neighbor search by using the Kd tree to find a best matching target feature descriptor corresponding to each source feature descriptor;
s3-2-3: finally, creating a Corre_3DMatch structure object, storing the matched source point cloud index, the matched target point cloud index, the coordinates of the source point and the target point and the matched score, adding the matched result into a Corres vector, and storing all matched pairs;
s3-2-4: by comparing the SHOT feature descriptors of the two point clouds, the best matching pairs are found. The matching results are saved as Corre_3DMatch structures and stored in the raw_correspondence vector; the raw correspondences are then downsampled: 50 random numbers are generated with boost_rand and the selected correspondences are stored in a new correspondence set for subsequent processing.
Further, the point cloud registration method further includes:
s4: calculating the relation between the characteristic points and the characteristic points of the next frame through the pairing blocks, and constructing a graph matrix and an information matrix;
further, the specific process of constructing the graph matrix graph in step S4 includes:
S4-1: referring to fig. 5, an initial approach is usedInformation, using equation 2, calculate 2 pairs of paired points i, j, which are the difference between the distances of two frames, i.e., how much +.>I generates a distance score with all other points, including itself.
The score was calculated using equation 3:
dcmp is a given parameter, scmp has a score of 0 to 1, and the score is the largest, which means that the smaller the distance inside, the better the effect and the closer the feature points are.
S4-2: a WFOG matrix is then generated. If the Scmp score is greater than the threshold tcmp, then a value within one edge eij is generated that is Scmp, otherwise the corresponding position is set to 0. If there are N pairs of matching points, by traversing the relationships between N all points and all points, a WFOG matrix is generated, the size of which is n×n.
S4-3, generating a WSOG matrix, using a formula 4, carrying out cross multiplication on the cmp_score matrix, and carrying out dot formation, namely, compared with FOG, SOG has stricter adjacent nodes and sparser SOG;
Next, using the above matrix, the weight information is calculated: if the graph matrix is non-zero, the adjacency between each point of each frame and the other points is counted, the weight information of each node in the MAC is computed, and the score weights between each point and the other points are accumulated, as shown in fig. 6. For each node i, the cumulative weight information is calculated with Equation 5, accumulating a term formed from Graph(i,a), Graph(i,b) and Graph(a,b) over every adjacent pair (a, b) of its neighbors, as detailed in the loop below.
The number of neighbors of i is calculated, a weight factor is computed, and the factor score sum and the average factor score over all points are obtained. This calculation judges how salient the features around the i-th feature point are, i.e. whether its surroundings concentrate effective features. Finally the cluster factors and score factors are sorted, the threshold OTSU is computed from the cluster factor scores with the OTSU_thresh threshold segmentation method, and the related parameters are set accordingly. The cluster threshold is taken as the minimum of OTSU, the average factor and the total factor.
Through this step, an adjacency-matrix representation of the relation graph of the paired blocks is generated.
Specifically, it is first judged whether the two-norm of the Graph matrix is 0. If it is 0, the graph matrix is abnormal, there are no adjacent edges, and the operation stops. If it is normal, for each point of each frame its adjacency to the other points is counted, together with the indices of the corresponding non-zero entries: a Vote_exp object is created for row i, the number of non-zero scores of row i in the graph is stored using degree, and the corresponding adjacent indices are recorded in a container vector<int> code_index.
The weight information of each node in the MAC is then computed. The number of OpenMP threads is set to 12; when row i is traversed, the computation is divided among several threads, the weight is calculated, and the clique with the largest weight for the node is retained. For the feature point corresponding to row i, accumulation is performed once for each pair of its neighbors, for example the 2nd neighbor with the 3rd, 4th, ..., then the 3rd neighbor with the 4th, 5th, and so on. The specific calculation is as follows:
for each node i, the number of its adjacent nodes index_size is obtained, and the following loop is parallelized: for the neighbor index j, looping from 0 to index_size−1, the index of adjacent node a is obtained; for the neighbor index k, looping from j+1 to index_size−1, the index of adjacent node b is obtained; if Graph(a, b) is not 0, a critical section (OpenMP critical) is used to update the variable safely across threads, and the third power of Graph(i, a) × Graph(i, b) × Graph(a, b) is calculated and accumulated into wijk.
S4-5: calculating the weight factor. For the i-th point, if its degree is greater than or equal to 1, the weight factor is obtained from the ratio of the weight to the number of neighbor pairs; the index of the i-th point and its weight factor are added to an object t and stored in cluster_factor; otherwise the corresponding weight factor is set to 0. This calculation judges how salient the features around the i-th feature point are, i.e. whether its surroundings concentrate effective features. The specific steps are as follows:
if the number of adjacent nodes of node i is greater than 1: the factor f1 is wijk, and the factor f2 is degree[i] × (degree[i] − 1) × 0.5; f1 is accumulated into the numerator sum sum_fenzi and f2 into the denominator sum sum_fenmu; the ratio factor = f1 / f2 is calculated; the index of the temporary variable t is set to i and its score to factor, and the element t is appended to the cluster factor vector cluster_factor;
otherwise (the degree of node i is 1): the index of the temporary variable t is set to i and its score to 0, and the element t is appended to the cluster factor vector cluster_factor. Finally the factor score sum and the average factor score over all points are calculated.
S4-6: sorting and threshold calculation. The cluster factor vector cluster_factor is sorted by score from small to large, and the node degree vector pts_degree is sorted by degree from small to large. The scores cluster_factor[i].score are assigned to cluster_coeffients (ordered from small to large).
The threshold is then computed: the OTSU_thresh threshold segmentation is called on the cluster factor scores to obtain the threshold OTSU, and the related parameters are set from it. The cluster threshold is taken as the minimum of OTSU, the average factor average_factor and the total factor total_factor.
The specific steps of calculating the threshold value are as follows:
The variables cnt and OTSU are first initialized to 0. If the score of the first element in cluster_factor is not equal to 0, the OTSU_thresh thresholding method computes the best threshold for cluster_coeffients and the result is assigned to OTSU.
The cluster_threshold is then calculated as the minimum of the three values OTSU, average_factor and total_factor. The weight_threshold is set either to cluster_threshold or to a fixed value of 0.5 or 0, depending on the add_overlap condition.
If add_overlap is false, the score of each element in the correspondence set of the paired block points is set to the score at the corresponding position in cluster_factor_bac (unordered), and the flow ends.
The specific steps of the OTSU_thresh threshold segmentation method used to compute the optimal threshold are as follows:
first, some variables are defined, including counters, a score histogram and a score-sum histogram. Next, the scores in the input value sequence are accumulated to obtain their sum, and each score is appended to a vector named all_score. all_score is sorted so that the score values are arranged from small to large. The number of histogram bins quant_num is determined, which decides into how many intervals the score values are divided.
The histogram bin to which each score belongs is then computed and counted: the length of the interval between the maximum and minimum score values is divided by quant_num to obtain the bin width quant_step. The value sequence is traversed, the counter of the bin to which each score belongs is incremented by one, and the score is accumulated into the corresponding bin sum.
Finally the histogram bins are traversed, the between-class variance under each threshold is computed, and the maximum between-class variance with its corresponding threshold is recorded. Concretely: for each histogram bin, the sample count of the current bin is added to n1. If n1 is 0, there are no samples on this side yet and the iteration is skipped; otherwise the sample count on the other side is n2, i.e. the total sample count minus n1.
The average score m1 of the n1 class and the average score m2 of the n2 class are then computed, and the between-class variance under the current threshold is sb = n1 × n2 × (m1 − m2)². If the current sb is greater than the previously recorded maximum fmax, fmax and the threshold thresh are updated. Finally the optimal threshold thresh is obtained.
Still further, referring to fig. 7, after the method for constructing the graph matrix and the information matrix, the method further includes:
s5: searching all maximal cliques through the graph matrix, which specifically comprises the following steps:
s5-1: first a graph g of igraph_t type and a matrix g_mat of igraph_matrix_t type are created, the matrix representing the adjacency of the graph. A vector weights of igraph_vector_t type is then initialized, whose length is half of the number of rows times (the number of columns minus one), i.e. the number of edges in the graph. The given conditions are then checked: if cluster_threshold is greater than 3 and the number of correspondences is greater than 100, the scale of the graph is reduced as follows: a scale factor f is initialized to 10, and a loop is entered in which the value of f is adjusted dynamically by comparison with the threshold until the condition f × max(OTSU, total_factor) <= cluster_factor[99].score is satisfied; inside the loop the rows and columns of the graph are traversed and the elements satisfying cluster_factor_bac[i].score > f × max(OTSU, total_factor) are copied into g_mat.
If the condition is not met, i.e. cluster_threshold is not greater than 3 or the number of correspondences is not greater than 100, the rows and columns of the graph are traversed and the non-zero elements are copied into g_mat.
S5-2: the igraph_set_attribute_table function of the C++ library is called and the attributes of the graph are set, indicating that the graph is undirected, that a 0 entry in the adjacency matrix denotes an edge weight of 0 and that index 1 refers to the edge weight stored in the weight vector, and each edge is assigned its corresponding weight information.
S5-3: with continued reference to fig. 7, all maximal cliques in the igraph are found, for example cliques with 3 nodes such as (c1, c3, c5) or (c1, c6, c7) for the feature point with index 1. By calling the igraph_maximal_cliques function, all maximal cliques in graph g are found and the result is stored in cliques; the parameters of the function are the graph g, the vector cliques storing the maximal cliques, and two integer parameters, without considering clique weights. The number of maximal cliques is obtained by calling igraph_vector_ptr_size; if the number is 0, no maximal cliques were found. Through these steps, all maximal cliques in the graph are successfully found.
Further, referring to fig. 8, after all maximal cliques in the graph g have been found, the method further includes:
S6: using the weight matrix information and selecting with the nodes as guidance, the maximal cliques are simplified so that only a few cliques remain for subsequent registration, which specifically comprises the following steps:
s6-1: guided by the weight matrix information, the weight of each clique is computed for each node, the K cliques with the largest weights are found, and only these are retained to improve the running speed. The specific steps are as follows:
s6-1-1: a result array result is initialized, and the maximal-clique index, size, weight and count of each node are initialized to default values. The list remain of maximal-clique indices to be processed is then traversed, and a weight is computed for each maximal clique, i.e. the sum of the weights of the edges between every pair of nodes in the clique. The maximal-clique information of each node is then updated: the clique with the largest weight is selected as the clique the node belongs to, and its index, size and weight are updated. The unselected maximal cliques are cleaned up and the remain list is updated.
S6-1-2: the maximal cliques are then further screened by certain conditions, such as a normal-vector consistency check. If the number of maximal cliques exceeds the estimated maximum number est_num, they are sorted by weight in descending order and the top est_num cliques with the largest weights are retained.
Further, the details of the sum of the calculated weights in the step S6-1 are as follows:
the maximum clique index list to be processed remains is traversed first. For each maximum clique, its pointer v is obtained, and the weight is initialized to 0, and the number of maximum clique nodes is obtained.
By traversing the nodes in the biggest clique, the sum of the weights of the edges between each pair of nodes is calculated by a two-layer loop and accumulated into weight.
And traversing the nodes in the maximum cliques again, if the weight of the maximum cliques to which the nodes belong is smaller than the weight obtained by current calculation, updating the index, the size and the weight of the maximum cliques to which the nodes belong, and increasing the count of the maximum cliques. And increment the count of the maximum clique in the vis array for subsequent cleaning operations.
After the loop is completed, the maximum clique information of each node is updated to the maximum clique with the greatest weight, and the vis array records the count of each maximum clique.
Through these two traversal loops, the goal is to find, for each node, the maximal clique with the largest weight according to the sum of the edge weights inside the clique: for the paired points of the i-th row, node 0 is combined with nodes 1, 2, 3, …, node 1 with nodes 2, 3, 4, and so on, the pairwise weights are added to obtain the clique weight, and the highest score is finally kept by selecting over different i. The maximal-clique information of the node is updated accordingly: the nodes in the clique are traversed again, and if the weight of the clique a node currently belongs to is smaller than the newly computed weight, the index, size and weight of that node's clique are updated and the clique count is increased.
Further, in the step S6, the method for selecting and simplifying the maximum group by using the node as a guide further includes:
s6-2: the number of maximal cliques is reduced so that only a specified number of cliques is kept. It is first checked whether the current number of maximal cliques to be processed exceeds the specified maximum number (est_num). If so, an integer vector named after_decline is created to store the indices of the maximal cliques to be kept, and a vector named clique_score is created to store the score information of the maximal cliques. All nodes are traversed; for the maximal-clique information of each node, the node is skipped if its clique index is smaller than 0; otherwise the index and score of the clique are added to clique_score. clique_score is then sorted by score, the indices of the top est_num cliques are selected from high to low, and they are added to after_decline.
Finally, the list remain of maximal-clique indices to be processed is cleared and assigned the content of after_decline, the content of clique_score is cleared, and the memory of the dynamically allocated vis array is released.
Further, the method for point cloud registration further comprises:
S7: and constructing a weight matrix, guiding by selecting the first K pairing points with larger weights, and predicting the initial pose and the score by using a weight solving method.
Specifically, in step S7, the predicting the initial pose and the score by using the weight solving method specifically includes:
s7-1: and carrying out SVD solving on the selected clique by using the weight information. The overall part is divided into 5 steps as follows:
1. creating a source point cloud src_pts, a target point cloud des_pts and a weight vector weight, wherein the source point cloud src_pts, the target point cloud des_pts and the weight vector weight are used for storing coordinates and weights of matching point pairs meeting a weight threshold;
2. if the number of matching point pairs meeting the weight threshold is less than 3, returning a score of 0 to indicate that the conversion is invalid;
3. normalizing the weight vector to be within the range of [0, 1 ];
4. if overlap or consistency between instances is not considered, the weight vector is set to all 1;
5. SVD decomposition is performed according to weights using the weight_SVD function and a transformation matrix trans is calculated.
Still further, referring to fig. 9, the constructing a weight matrix specifically includes:
s7-2: judging by the weight: if a weight is not greater than the threshold, it is set to 0 and discarded; a weighted least-squares method based on singular value decomposition from pcl is used to estimate the rigid transformation matrix between the point clouds. One iterative solve is then performed:
S7-2-1: first a weight matrix weight is constructed whose size equals the number of points in the point cloud, and it is filled from the weight vector. An identity matrix of the same size as the weight matrix is created, and the weight matrix is multiplied element-wise by this identity matrix to obtain the final (diagonal) weight matrix weight.
S7-2-2: the centroid coordinates of the source and target point clouds are then calculated, the mean-reduced matrices of the source and target clouds are computed by subtracting the corresponding centroid from every point coordinate, and finally the matrix H of the rotation part of the transformation is obtained as the product of the demeaned source matrix, the weight matrix and the transpose of the demeaned target matrix, taking the 3×3 submatrix in the upper-left corner.
S7-3: the pose score is optimized again until the score no longer increases or the iteration count is reached, iterating over the matching points of the correspondence set. The specific refinement steps are as follows:
post-processing optimization is first performed within a given number of iterations. Each point in the source point cloud is traversed, its distance to the corresponding point in the target cloud is calculated, and the score is computed from the distance and the weight. If the distance of point pair j is smaller than the inlier threshold inlier_thresh, pred_inlier_index stores pair j, the weight is computed with Equation 6 and the score with Equation 7.
It is then judged whether to terminate: if the current score is lower than the previous best score, the iteration ends; otherwise the current score is higher, and the best score and the transformation matrix are updated.
Finally, the points corresponding to the inlier indices j are extracted from the paired source and target clouds to construct a predicted source cloud and a predicted target cloud, the corresponding weights are assigned to weight_pred, and the predicted clouds are optimized with the weighted solving method weight_SVD to obtain an updated transformation matrix. These steps are repeated until the specified number of iterations is reached or the score no longer increases.
S7-4: the finally selected cliques between the original point clouds are displayed, together with the effect after registration.
A viewer object is created with the pcl library, the selected cliques are visualized with connecting lines, and the updated, registered point cloud is compared visually with the original point cloud.
Still further, the point cloud registration further includes:
s8: and iteratively optimizing the initial pose and the score, selecting the key frame index with the maximum score, and outputting the pose.
Specifically, in the step S8, selecting the key frame index with the largest score, and outputting the pose specifically includes:
S8-1: first, the two-frame registration scores from step S7 are obtained. While traversing the current frame against the historical frames, a global variable maxScore records the best two-frame registration score bestScore; whenever bestScore is larger than maxScore, maxScore and the index maxIndex are updated;
s8-2: the historical keyframes are traversed, the keyframe with index maxIndex is found, the search loop is exited, and the corresponding keyframe pose information is output;
s8-3: finally, the historical keyframe is transformed into the current keyframe coordinate system and the current pose of the mobile carrier is output.
Example 1
Referring to the flowchart of fig. 1, the specific steps implemented by the method are as follows: first, 1000 historical keyframes and their corresponding poses are read, ground removal is performed on the historical keyframes, the current lidar frame is obtained and matched frame by frame against the 1000 keyframes, the historical keyframe with the best registration is found, and the pose at that index is transformed into the world coordinate system and output. In the test, the current input frame is assumed to be closest to frame 97; the registration accuracy and registration time against frame 97 are analyzed, and the comparison algorithms and the ndt algorithm are tested to see whether they can identify it.
As shown in Table 1, the two-frame registration speed of the method stays essentially within 160 ms, and when the optimal registration frame is identified its registration score is much larger than that of the other frames, making it easy to identify.
As shown in Table 2 below, the location of the current frame is assumed to be most similar to that of frame 94. In the experiment, the algorithm compares all inter-frame registration scores, finds the largest score of 31.9999, and outputs the pose corresponding to keyframe 94, as expected.
The registration accuracy of the algorithm is also superior to that of the ndt and icp algorithms, as shown by comparing the registration of two point cloud frames under different rotations and translations for the different algorithms. The specific analysis is as follows: when the two frames differ by 30 degrees about the z-axis, the registration performance of icp and ndt degrades, they fall into a local optimum and cannot register the two frames, while the algorithm still registers and converges accurately. In particular, figs. 10, 11 and 12 show the effects of the algorithm, ndt and icp when the two frames differ by 90 degrees about the z-axis, and fig. 13 shows the registration effect of the algorithm at a 180-degree difference. When the z-axis rotation reaches 90 degrees or 180 degrees, ndt and icp registration no longer work, while the algorithm still converges stably. In an experiment with a 5 m translation along the x-axis, the ndt algorithm fails and the icp registration is poor, while the algorithm still converges and remains stable; the method therefore has strong registration adaptability and good robustness and can handle inter-frame registration under large rotations and translations.
In addition, the time performance of this algorithm relative to the others was analyzed by comparing the registration speeds of the different algorithms, as shown in Table 3 below. The total initial-positioning time of the algorithm lies between those of the icp and ndt algorithms. Further analysis shows that the inter-frame running speed of the algorithm is stable, varies little, and is only slightly affected by the quality and quantity of the point clouds, whereas ndt and icp take much longer when the point clouds are very dissimilar or very large: most inter-frame runs of the icp algorithm take 300-400 ms, the single-frame registration time of this algorithm jumps between 150 ms and 400 ms, and the ndt registration time ranges from roughly 200-250 ms up to 800 ms. Note that for this method a larger maximal-clique score means a better registration, whereas for ndt and icp a smaller registration score is better. Compared with the other two methods, this method has high operating efficiency, better robustness, and high registration accuracy, and can realize the initial positioning function efficiently and accurately.
In summary, the method disclosed by the invention is intended for movable carriers such as unmanned vehicles and robots, and mainly aims to quickly and robustly obtain the global pose information of the mobile carrier in scenes without satellite signals. After the historical keyframes and their poses are bound, the current frame and the historical keyframes are input, and ground points and noise points are removed by filtering to obtain point clouds with low initial noise. In the point cloud registration, the current frame and the historical clouds are registered pairwise: coarse matching is first performed to obtain paired blocks, a graph matrix is then constructed after downsampling, the top K paired points with the largest weights in the weight matrix are selected as guidance, and the initial pose and registration score are predicted with a weighted solving method; with few constraints and fast computation, the initial pose and score are obtained and the initial pose of the mobile carrier is found. The invention thus provides a fast and accurate initial positioning method with a small memory footprint, which enables the mobile carrier to achieve fast initial positioning in complex environments and facilitates subsequent operations such as relocalization and loop closure after the pose is lost.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (5)

1. A fast robust initial positioning method based on laser point cloud, the method comprising point cloud preprocessing and point cloud registration;
the point cloud preprocessing comprises the following steps:
reading historical key frame point cloud information and frame pose information and binding;
reading a current point cloud frame and a historical key frame to remove ground points and noise points for filtering;
the point cloud registration includes:
performing rough matching on the point cloud to obtain a pairing block;
calculating the relation between the feature points and the feature points of the next frame through the pairing blocks, and constructing a graph matrix and an information matrix;
constructing a weight matrix, guiding by selecting the first K pairing points with larger weights, and predicting initial pose and score by using a weight solving method;
iteratively optimizing the initial pose and the score, selecting a key frame index with the maximum score, and outputting the pose;
after constructing the graph matrix and the information matrix, the method further comprises the following steps:
initializing the information of a graph matrix library, and if the clique threshold is greater than 3 and the corresponding number is greater than 100, reducing the clique size; otherwise, directly traversing the rows and columns of the graph matrix and copying the non-zero elements into a graph g;
then, calling the graph matrix in a c++ library, and finding all maximal cliques in the graph g by using a maximum clique method of the graph;
after finding all maximal cliques in the graph g, the method further comprises:
using the weight matrix information and selecting with the nodes as guidance, simplifying the maximal cliques so that only a few cliques are kept for subsequent registration;
the specific method for simplifying the maximal cliques comprises the following steps:
according to the sum of the edge weights within the maximal cliques, finding for each node the maximal clique with the largest weight, summing the two to obtain a weight, and finally keeping the maximal cliques with the top K scores and updating the maximal-clique information of the nodes through this selection;
the constructing of the weight matrix specifically comprises the following steps:
judging by using the weights: if a weight is not greater than the threshold value, setting it to 0 and discarding it; a weighted least squares method based on the singular value decomposition of pcl is used to estimate the rigid transformation matrix between the point clouds;
performing one iteration of the solution, and if the weights meet the requirement, constructing a weight matrix whose size is the number of points in the point cloud and filling it according to the weight vector;
creating a matrix with the same size as the weight matrix, setting it to the identity matrix, and performing an element-by-element multiplication of the weight matrix with this identity matrix to obtain the final weight matrix;
the iterative optimization of the initial pose and the score specifically comprises:
traversing each point in the source point cloud, calculating the distance between the point and the corresponding point in the target point cloud, and calculating a score according to the distance and the weight, wherein the number of iterations is given;
if the distance between matched points is smaller than the inlier threshold, storing the matched points and calculating the weights and the score;
if the current score is lower than the previous best score, ending the iteration; otherwise, extracting the points corresponding to the inlier indices from the paired source point cloud and target point cloud, constructing a predicted source point cloud and a predicted target point cloud, obtaining the corresponding weight assignments, and optimizing the predicted source and target point clouds by using the weighted solving method to obtain an updated transformation matrix;
repeating the steps until the specified iteration times are reached or the score is not increased.
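As an illustration of the weighted solving and iterative optimization recited in claim 1, the sketch below re-implements a weighted, SVD-based rigid estimate with Eigen and wraps it in an inlier-driven refinement loop. The patent relies on a pcl-based weighted least squares with singular value decomposition; the distance-based weight kernel, the inlier threshold and the score formula used here are assumptions made for the example.

```cpp
#include <Eigen/Dense>
#include <Eigen/SVD>
#include <vector>

// Weighted Kabsch/SVD rigid estimate between paired 3D points (maps src onto tgt).
Eigen::Matrix4f weightedRigidTransform(const std::vector<Eigen::Vector3f>& src,
                                       const std::vector<Eigen::Vector3f>& tgt,
                                       const std::vector<float>& w) {
  float wsum = 0.f;
  Eigen::Vector3f mu_s = Eigen::Vector3f::Zero(), mu_t = Eigen::Vector3f::Zero();
  for (size_t i = 0; i < src.size(); ++i) {
    wsum += w[i];  mu_s += w[i] * src[i];  mu_t += w[i] * tgt[i];
  }
  mu_s /= wsum;  mu_t /= wsum;
  Eigen::Matrix3f H = Eigen::Matrix3f::Zero();          // weighted cross-covariance
  for (size_t i = 0; i < src.size(); ++i)
    H += w[i] * (src[i] - mu_s) * (tgt[i] - mu_t).transpose();
  Eigen::JacobiSVD<Eigen::Matrix3f> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3f R = svd.matrixV() * svd.matrixU().transpose();
  if (R.determinant() < 0) {                             // avoid reflections
    Eigen::Matrix3f V = svd.matrixV();  V.col(2) *= -1.f;
    R = V * svd.matrixU().transpose();
  }
  Eigen::Matrix4f T = Eigen::Matrix4f::Identity();
  T.block<3,3>(0,0) = R;
  T.block<3,1>(0,3) = mu_t - R * mu_s;
  return T;
}

// Iterative refinement: keep inliers, re-weight, re-solve, stop when the score drops.
Eigen::Matrix4f refine(const std::vector<Eigen::Vector3f>& src,
                       const std::vector<Eigen::Vector3f>& tgt,
                       int max_iter = 10, float inlier_th = 0.5f) {
  Eigen::Matrix4f T = Eigen::Matrix4f::Identity();
  float best_score = -1.f;
  for (int it = 0; it < max_iter; ++it) {
    std::vector<Eigen::Vector3f> s_in, t_in;
    std::vector<float> w;
    float score = 0.f;
    for (size_t i = 0; i < src.size(); ++i) {
      Eigen::Vector3f p = T.block<3,3>(0,0) * src[i] + T.block<3,1>(0,3);
      float d = (p - tgt[i]).norm();
      if (d < inlier_th) {                 // inlier test as in the claim
        float wi = 1.f - d / inlier_th;    // assumed distance-based weight
        s_in.push_back(src[i]); t_in.push_back(tgt[i]); w.push_back(wi);
        score += wi;                       // assumed score: sum of inlier weights
      }
    }
    if (score <= best_score || s_in.size() < 3) break;   // stop when no longer improving
    best_score = score;
    T = weightedRigidTransform(s_in, t_in, w);
  }
  return T;
}
```

As in the claim, the loop ends either after the given number of iterations or as soon as the score stops increasing.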
2. The laser point cloud based fast robust initial positioning method of claim 1, wherein said removing ground points specifically comprises:
receiving the point cloud to be preprocessed, sorting the input point cloud from small to large according to the Z value, and removing points whose height is below a preset height threshold;
deriving the ground height from the lidar mounting position on the carrier, extracting seed points of the ground points, estimating the normal vector and the height-distance threshold of the seed-point plane by a plane estimation method, and iteratively removing non-ground points;
and calculating the projection of each seed point onto the ground normal direction, judging whether the projection is smaller than the ground threshold, storing the point as a ground point if so, repeating the iteration, and outputting the point cloud with the ground points removed after the last iteration.
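Purely as an illustration of claim 2, the following sketch fits the ground plane from seed points taken from the lowest part of the Z-sorted cloud and iteratively re-classifies points by their projection onto the plane normal. The sensor height, seed count, iteration count and distance threshold are assumed values, and the plane is estimated here from the covariance of the ground set rather than by the patent's specific plane pre-estimation method.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <vector>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

Cloud::Ptr removeGround(const Cloud::Ptr& in, float sensor_height = 1.8f,
                        int num_seeds = 200, int num_iter = 3, float dist_th = 0.2f) {
  // 1. Sort by Z and drop points far below the ground level implied by the lidar mounting height.
  std::vector<pcl::PointXYZ> pts(in->points.begin(), in->points.end());
  std::sort(pts.begin(), pts.end(),
            [](const pcl::PointXYZ& a, const pcl::PointXYZ& b) { return a.z < b.z; });
  const float ground_guess = -sensor_height;
  pts.erase(pts.begin(),
            std::find_if(pts.begin(), pts.end(),
                         [&](const pcl::PointXYZ& p) { return p.z >= ground_guess - 0.5f; }));

  // 2. The lowest points serve as seeds for the ground plane.
  std::vector<int> ground_idx(std::min<std::size_t>(num_seeds, pts.size()));
  for (std::size_t i = 0; i < ground_idx.size(); ++i) ground_idx[i] = static_cast<int>(i);

  Eigen::Vector3f normal(0.f, 0.f, 1.f);
  float d = 0.f;
  for (int it = 0; it < num_iter; ++it) {
    if (ground_idx.empty()) break;
    // Plane fit: the normal is the eigenvector of the smallest eigenvalue of the seed covariance.
    Eigen::Vector3f mean = Eigen::Vector3f::Zero();
    for (int i : ground_idx) mean += pts[i].getVector3fMap();
    mean /= static_cast<float>(ground_idx.size());
    Eigen::Matrix3f cov = Eigen::Matrix3f::Zero();
    for (int i : ground_idx) {
      Eigen::Vector3f q = pts[i].getVector3fMap() - mean;
      cov += q * q.transpose();
    }
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> es(cov);
    normal = es.eigenvectors().col(0);
    d = -normal.dot(mean);

    // Re-classify: a point whose projection onto the normal is within the threshold is ground.
    ground_idx.clear();
    for (std::size_t i = 0; i < pts.size(); ++i)
      if (std::abs(normal.dot(pts[i].getVector3fMap()) + d) < dist_th)
        ground_idx.push_back(static_cast<int>(i));
  }

  // 3. Output the non-ground points left after the last iteration.
  Cloud::Ptr out(new Cloud);
  std::vector<bool> is_ground(pts.size(), false);
  for (int i : ground_idx) is_ground[i] = true;
  for (std::size_t i = 0; i < pts.size(); ++i)
    if (!is_ground[i]) out->points.push_back(pts[i]);
  out->width = static_cast<uint32_t>(out->points.size());
  out->height = 1;
  return out;
}
```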
3. The laser point cloud based fast robust initial positioning method of claim 1, wherein prior to said rough matching of the point cloud, the method further comprises:
reading point clouds of a current frame and a historical key frame, performing rough registration in pairs, searching out two nearest points by using a Kd tree, calculating the distance between the two points, summing all the distances, dividing the sum by the number of points in the point clouds to obtain an approximate point cloud resolution value of each frame, and solving the average resolution of the two frames of point clouds; and downsampling the point clouds by using voxel filtering, reducing the number density of the point clouds, and extracting feature points.
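A minimal sketch of the resolution estimate and voxel downsampling of claim 3, using PCL's KdTreeFLANN and VoxelGrid; tying the voxel leaf size to the estimated resolution in the usage comment is an assumption for the example.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/filters/voxel_grid.h>
#include <cmath>
#include <vector>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

// Approximate cloud resolution: mean distance from each point to its nearest neighbour.
float cloudResolution(const Cloud::Ptr& cloud) {
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(cloud);
  std::vector<int> idx(2);
  std::vector<float> sq_dist(2);
  double sum = 0.0;
  int n = 0;
  for (const auto& p : cloud->points) {
    // k = 2: the first hit is the query point itself, the second is its nearest neighbour.
    if (tree.nearestKSearch(p, 2, idx, sq_dist) == 2) {
      sum += std::sqrt(sq_dist[1]);
      ++n;
    }
  }
  return n > 0 ? static_cast<float>(sum / n) : 0.f;
}

Cloud::Ptr downsample(const Cloud::Ptr& cloud, float leaf) {
  pcl::VoxelGrid<pcl::PointXYZ> vg;
  vg.setInputCloud(cloud);
  vg.setLeafSize(leaf, leaf, leaf);     // reduce point density before feature extraction
  Cloud::Ptr out(new Cloud);
  vg.filter(*out);
  return out;
}

// Example use on a current frame `cur` and a historical key frame `key`:
//   float res = 0.5f * (cloudResolution(cur) + cloudResolution(key));
//   Cloud::Ptr cur_ds = downsample(cur, 4.f * res);   // leaf size relative to resolution (assumed)
```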
4. A fast robust initial positioning method based on laser point cloud according to claim 3, wherein the extracting feature points specifically comprises:
calculating the fast point feature histogram descriptors of the two point cloud frames respectively, which specifically comprises:
a simplified point feature histogram: taking the set of corresponding tuples between each point to be calculated and its neighbouring points as the simplified point feature histogram;
a fast point feature histogram, for which the following formula is used:
FPFH(p_q) = SPFH(p_q) + (1/k) · Σ_{i=1..k} (1/ω_i) · SPFH(p_i),
where p_i denotes the k neighbours of the query point p_q, ω_i is the distance between p_q and p_i, and SPFH denotes the simplified point feature histogram;
for each point, its k neighbours are re-determined, and the final histogram of the point is weighted with the simplified point feature histograms of the neighbouring points to obtain the fast point feature histogram descriptor.
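PCL's FPFHEstimation implements the SPFH weighting scheme summarised by the formula above, so the feature extraction of claim 4 can be sketched as follows; the two search radii are assumed values, not taken from the patent.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/search/kdtree.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

pcl::PointCloud<pcl::FPFHSignature33>::Ptr computeFPFH(const Cloud::Ptr& cloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  // Surface normals are required before the histograms can be built.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setRadiusSearch(0.5);                       // assumed normal-estimation radius
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  ne.compute(*normals);

  // FPFH: each point's SPFH is combined with the distance-weighted SPFHs of its neighbours.
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud(cloud);
  fpfh.setInputNormals(normals);
  fpfh.setSearchMethod(tree);
  fpfh.setRadiusSearch(1.0);                     // must be larger than the normal radius
  pcl::PointCloud<pcl::FPFHSignature33>::Ptr desc(new pcl::PointCloud<pcl::FPFHSignature33>);
  fpfh.compute(*desc);
  return desc;
}
```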
5. The laser point cloud based fast robust initial positioning method according to claim 1, wherein the selecting of the key frame index with the largest score and outputting the pose specifically comprises:
obtaining the registration scores of the two frames, finding the maximum score and its index, outputting the corresponding key frame pose information, converting the historical key frame into the current key frame coordinate system, and outputting the current pose of the mobile carrier.
CN202311583281.3A 2023-11-24 2023-11-24 Quick robust initial positioning method based on laser point cloud Active CN117291973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311583281.3A CN117291973B (en) 2023-11-24 2023-11-24 Quick robust initial positioning method based on laser point cloud

Publications (2)

Publication Number Publication Date
CN117291973A CN117291973A (en) 2023-12-26
CN117291973B true CN117291973B (en) 2024-02-13

Family

ID=89258925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311583281.3A Active CN117291973B (en) 2023-11-24 2023-11-24 Quick robust initial positioning method based on laser point cloud

Country Status (1)

Country Link
CN (1) CN117291973B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180136332A1 (en) * 2016-11-15 2018-05-17 Wheego Electric Cars, Inc. Method and system to annotate objects and determine distances to objects in an image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241002A (en) * 2020-10-11 2021-01-19 西北工业大学 Novel robust closed-loop detection method based on Karto SLAM
CN113345009A (en) * 2021-05-31 2021-09-03 湖南大学 Unmanned aerial vehicle dynamic obstacle detection method based on laser odometer
CN115728803A (en) * 2022-11-23 2023-03-03 武汉大学 System and method for continuously positioning urban driving vehicle
CN116148808A (en) * 2023-04-04 2023-05-23 江苏集萃清联智控科技有限公司 Automatic driving laser repositioning method and system based on point cloud descriptor
CN116485892A (en) * 2023-04-11 2023-07-25 中国科学院合肥物质科学研究院 Six-degree-of-freedom pose estimation method for weak texture object

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Predicting Alignability of Point Cloud Pairs for Point Cloud Registration Using Features; André Kirsch et al.; 2022 12th International Conference on Pattern Recognition Systems (ICPRS); pp. 1-6 *
Research on a Kinect-based Motion Detection System; Tao Wei; China Master's Theses Full-text Database, Information Science and Technology Series, No. 09; pp. I138-871 *
Research on Localization and Mapping Algorithms for Unmanned Vehicles Based on Laser-Inertial Information Fusion; Zhou Xinjie; China Master's Theses Full-text Database, Engineering Science and Technology II Series, No. 08; pp. C035-6 *

Also Published As

Publication number Publication date
CN117291973A (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN109614985B (en) Target detection method based on densely connected feature pyramid network
US7995055B1 (en) Classifying objects in a scene
Hofmann et al. Background segmentation with feedback: The pixel-based adaptive segmenter
Ali et al. Critical analysis of DBSCAN variations
CN111999741B (en) Method and device for detecting roadside laser radar target
KR20200129314A (en) Object detection in very high-resolution aerial images feature pyramid network
CN112287906B (en) Template matching tracking method and system based on depth feature fusion
CN113484875B (en) Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
CN115240149A (en) Three-dimensional point cloud detection and identification method and device, electronic equipment and storage medium
KR20150014646A (en) Method for segmenting aerial images based region and Computer readable storage medium for storing program code executing the same
Goel Realtime object detection using tensorflow an application of ml
KR101612779B1 (en) Method of detecting view-invariant, partially occluded human in a plurality of still images using part bases and random forest and a computing device performing the method
CN117291973B (en) Quick robust initial positioning method based on laser point cloud
Hodne et al. Detecting and suppressing marine snow for underwater visual slam
KR20220123142A (en) Labeling device and learning device
Bhardwaj et al. Spectral-spatial active learning with superpixel profile for classification of hyperspectral images
CN114913519B (en) 3D target detection method and device, electronic equipment and storage medium
Burlacu et al. Stereo vision based environment analysis and perception for autonomous driving applications
Huang et al. Semantic labeling and refinement of LiDAR point clouds using deep neural network in urban areas
Arief et al. Density-Adaptive Sampling for Heterogeneous Point Cloud Object Segmentation in Autonomous Vehicle Applications.
Pratama et al. Application of yolo (you only look once) v. 4 with preprocessing image and network experiment
CN113569600A (en) Method and device for identifying weight of object, electronic equipment and storage medium
CN113129332A (en) Method and apparatus for performing target object tracking
Ruz et al. Visual recognition to access and analyze people density and flow patterns in indoor environments
Giosan et al. Building pedestrian contour hierarchies for improving detection in traffic scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant