CN113885046A - Intelligent internet automobile laser radar positioning system and method for low-texture garage - Google Patents
- Publication number
- CN113885046A (application number CN202111127722.XA)
- Authority
- CN
- China
- Prior art keywords
- point
- point cloud
- map
- garage
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention discloses an intelligent networked automobile lidar positioning system and method for a low-texture garage. The system comprises an initial pose determining module (100), a static object filtering module (200), a feature extraction module (300), a point-cloud-intensity-based odometer matching module (400), a radar point cloud frame and prior map matching module (500), and a fusion positioning module (600). The method determines the initial pose of the vehicle, filters out non-static objects, matches the radar point cloud frame against the prior map starting from the initial pose to obtain one vehicle pose estimate, and performs intensity-based odometer matching on the filtered static object points to obtain another; fusing the two estimates yields the positioning result of the vehicle. Compared with the prior art, the method obtains a high-frequency, high-precision vehicle positioning result.
Description
Technical Field
The invention relates to the technical field of unmanned vehicle positioning, and in particular to an intelligent networked automobile lidar positioning system for a garage.
Background
Positioning of intelligent networked automobiles answers the basic question of "where the vehicle is" and underlies applications such as navigation, obstacle avoidance and path planning. As intelligent networked automobiles continue to develop, the deployment of safe driving, vehicle-road coordination, intelligent transportation and the like all require real-time, accurate vehicle positioning. In an open outdoor environment, an intelligent networked automobile can rely on GNSS signals with RTK differential positioning to acquire real-time, accurate position information. In an indoor garage, however, satellite signals are blocked while high-precision positioning is still required, so other positioning methods are needed. Drawing on indoor robot positioning, existing technical schemes mostly apply IMU-based inertial navigation, WiFi signal positioning, RFID tag positioning, lidar positioning and the like. These prior arts have the following respective advantages and disadvantages:
1. IMU-based inertial navigation supports high-frequency positioning and, combined with the vehicle's wheel-speed sensor, gyroscope and similar devices, yields the transformation of the vehicle's current state. However, the method accumulates error and cannot be recalibrated once errors occur. Since a vehicle usually travels a long distance at a certain speed, applying a method that is precise only over short distances to long-term travel causes drift that cannot be corrected.
2. RFID tag positioning wirelessly excites nearby passive tags through electromagnetic induction and reads information from multiple tags pasted at different positions. However, RFID cannot achieve real-time positioning; it can only confirm presence within a region, the positioning range of a tag averages 4-6 m, and the communication signal suffers various interference inside a garage, degrading accuracy to the point that precise positioning is impossible.
3. WiFi positioning is widely used in indoor positioning research and mainly includes two methods: signal-strength-difference positioning and fingerprint positioning. Both require multiple nearby APs to obtain received signal strength and angle; since a garage is large, installing enough APs incurs considerable cost, and neither the real-time performance nor the accuracy meets the requirements of automobile driving.
4. Laser SLAM positioning has abundant research results and can be applied to intelligent networked vehicle positioning; lidar offers long range, high measurement accuracy, immunity to lighting and strong anti-interference capability. However, previous experimental results show that accurate positioning requires distinct geometric features as a basis for matching, so the positioning error in a low-texture garage environment is large. By analyzing the problems of many garage scenes, a novel positioning method for intelligent networked automobiles in low-texture garage environments is proposed.
Disclosure of Invention
Aiming at problems common to a large number of garages, the invention provides an intelligent networked automobile lidar positioning system and method for low-texture garages, realizing, on the basis of a lidar sensor and a prior map, an intelligent networked automobile positioning system suited to low-texture indoor garages.
The invention is realized by the following technical scheme:
An intelligent networked automobile lidar positioning system for a low-texture garage comprises an initial pose determining module 100, a static object filtering module 200, a feature extraction module 300, a point-cloud-intensity-based odometer matching module 400, a radar point cloud frame and prior map matching module 500, and a fusion positioning module 600; wherein:
the initial pose determining module 100 is effective when the vehicle enters the garage for the first time, and is used for capturing the pose of the vehicle when the vehicle enters the garage for the first time;
the static object filtering module 200 is configured to filter out a dynamic object and a ground point in a point cloud frame obtained from a laser radar, so as to obtain a static object;
the feature extraction module 300 is configured to select static object skeleton points by combining intensity differences and geometric space differences when performing feature point extraction;
the point-cloud-intensity-based odometer matching module 400 is used for odometer matching between adjacent frames acquired by the lidar;
the radar point cloud frame and prior map matching module 500 is used for matching the radar point cloud frame with the accumulated pose recorded by the odometer in the prior map;
the fusion positioning module 600 is configured to fuse the vehicle pose obtained by matching the radar point cloud frame with the prior map starting from the initial pose and the vehicle pose obtained by intensity-based odometer matching of the skeleton features of the filtered static object points, so as to obtain the positioning result of the vehicle.
An intelligent networked automobile lidar positioning method for a low-texture garage comprises the following specific steps:
step 1: judging whether the system is called for the first time, if so, turning to the step 2, and determining an initial pose; if not, turning to the step 4 to filter the static object;
step 2: determining an initial pose of the vehicle;
Step 3: sending the initial pose for matching the radar point cloud frame with the prior map, wherein the prior map adopts a map stream updated and loaded in real time;
Step 4: performing static object filtering: applying an appropriate dynamic threshold, derived from statistics over the point cloud database, to extract static objects from the point cloud frame acquired by the lidar at the current time t; specifically, dynamic objects and ground points are removed effectively and at high speed through intensity filtering;
Step 5: performing skeleton feature extraction on the filtered static object points; judging whether the current scene is a gallery (long corridor), and if so, extracting features by view angle;
Step 6: performing skeleton-feature intensity-based odometer matching on the filtered static object points, i.e., matching two consecutive point cloud frames with the lidar odometer to obtain the local pose transformation; the two frames are aligned by predicting the vehicle transformation, the closest points between the two frames are taken as corresponding points, and the transformation is then adjusted by iterative gradient descent until the error is minimized;
Step 7: fusing the vehicle pose obtained by matching the radar point cloud frame with the prior map from the initial pose and the vehicle pose obtained by skeleton-feature-based odometer matching of the filtered static object points.
Compared with the prior art, the method can obtain a high-frequency and high-precision vehicle positioning result.
Drawings
FIG. 1 is a schematic diagram of an intelligent networked automobile laser radar positioning system module for a low-texture garage according to the present invention;
FIG. 2 is an overall flowchart of the intelligent networked automobile lidar positioning method for a low-texture garage according to the present invention;
FIG. 3 is a schematic diagram illustrating the distance between an initial frame and an initial map point;
FIG. 4 is a schematic diagram of intensity distribution of points in a point cloud;
FIG. 5 is a diagram of threshold distributions in a garage data packet;
FIG. 6 is a graph of the results of static object filtering by intensity;
FIG. 7 is a schematic view of doors in a garage with insignificant geometric differences but significant strength differences;
FIG. 8 is a graph of distance distribution versus angle for the case of a gallery;
fig. 9 is a point cloud image sliced according to an angle.
Detailed Description
The technical solution of the present invention is further described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic diagram of the intelligent networked automobile lidar positioning system module for a low-texture garage according to the present invention. The system is divided into six parts: an initial pose determining module 100, a static object filtering module 200, a feature extraction module 300, a point-cloud-intensity-based odometer matching module 400, a radar point cloud frame and prior map matching module 500, and a fusion positioning module 600. Wherein:
the initial pose determining module 100 is effective when the vehicle enters the garage for the first time, and is used for capturing the pose of the vehicle when the vehicle enters the garage for the first time;
the static object filtering module 200 is configured to filter out a dynamic object and a ground point in a point cloud frame obtained from a laser radar, so as to obtain a static object;
the feature extraction module 300 is configured to select static object skeleton points by combining intensity differences and geometric space differences when performing feature point extraction;
the point-cloud-intensity-based odometer matching module 400 is used for odometer matching between adjacent frames acquired by the lidar;
the radar point cloud frame and prior map matching module 500 is used for matching the radar point cloud frame with the accumulated pose recorded by the odometer in the prior map;
the fusion positioning module 600 is configured to fuse the vehicle pose obtained by matching the radar point cloud frame with the prior map starting from the initial pose and the vehicle pose obtained by intensity-based odometer matching of the skeleton features of the filtered static object points, so as to obtain the positioning result of the vehicle.
Fig. 2 is the overall flowchart of the intelligent networked automobile lidar positioning method for a low-texture garage according to the present invention. The method comprises the following specific steps:
step 1: judging whether the system is called for the first time, if so, turning to the step 2, and determining an initial pose; if not, turning to the step 4 to filter the static object;
step 2: determining an initial pose;
Step 3: sending the initial pose for matching the radar point cloud frame with the prior map. Unlike other prior-map-based lidar positioning methods, the invention does not use a global map but a map stream updated and loaded in real time, because a garage contains many repetitive structures, so errors occur easily and cannot be corrected with a global map. The map stream used for comparison is built from an edge map and a plane map: the plane map, composed of walls, floor and ceiling, is used for coarse comparison; the edge map, composed of edge points in the garage such as wall boundaries and column boundaries, is used for finer comparison. The current frame is then compared with the map to obtain a more accurate match, which reduces accumulated drift and matching time. In a low-texture gallery garage, real-time positioning at a frequency of 10 Hz and a precision of 10 cm can be obtained based on the prior map;
Step 3.1: selecting the garage entrance map. The specific operation is: at the entrance where the vehicle enters, compare the initial frame captured on entering the garage from outside with the local maps at the fixed garage entrances, match against the local maps of the different entrances, and take the map with the highest matching degree as the currently compared garage entrance local map;
if the current frame is matched to a wrong map, the matched map is switched to a nearby local map;
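The selection logic of step 3.1 reduces to picking the candidate entrance map with the highest matching score. A minimal sketch in Python; `select_entrance_map` and the `score` fitness function are the editor's illustrative names (in practice the score would be the registration fitness of aligning the scan to each candidate local map):

```python
def select_entrance_map(scan, entrance_maps, score):
    """Pick the entrance local map whose matching score against the scan is highest.

    `score(scan, candidate)` is a hypothetical fitness function; a real
    implementation would register the scan against each candidate map and
    return, e.g., the inlier ratio of the alignment.
    """
    return max(entrance_maps, key=lambda m: score(scan, m))


if __name__ == "__main__":
    # toy example: "maps" are scalar stand-ins, score is negative distance
    maps = [1.0, 5.0, 9.0]
    chosen = select_entrance_map(6.0, maps, score=lambda s, m: -abs(s - m))
    print(chosen)  # 5.0 is closest to 6.0
```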
Step 3.2: registering the point cloud frame obtained by the lidar scan with the garage entrance prior map. To accelerate matching, the invention adopts a multithreaded GICP algorithm to obtain the initial pose. First, the point cloud at the current time t is down-sampled to obtain a point cloud P0 = {p_i}, i = 1, ..., n, which is compared with the map frame, where M^e denotes the map formed by the skeleton points and M^p the map formed by the plane points. Fig. 3 is a schematic diagram of the point-plane distance between the initial frame and the initial map: A is a point on a horizontal scan line a of the down-sampled initial frame; B1 is the closest point on the corresponding map line b1; B2 is the next closest point on the same line b1; B3 is the closest point on the adjacent line b2; B1, B2 and B3 form a plane (the grey map frame). The point-to-plane distance d is calculated as

d = |AB1 · (B1B2 × B1B3)| / |B1B2 × B1B3|

where AB1 is the vector from the initial-frame point A to the corresponding point B1, and B1B2 × B1B3 is the normal of the plane formed by the three adjacent points in the map frame; matching iterates continuously, optimizing the transformation so that this distance is minimized;
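The point-to-plane distance of step 3.2 can be sketched numerically. An illustrative NumPy computation (the editor's sketch, not the patent's implementation):

```python
import numpy as np

def point_to_plane_distance(a, b1, b2, b3):
    """Distance from initial-frame point A to the plane spanned by map points B1, B2, B3."""
    normal = np.cross(b2 - b1, b3 - b1)        # plane normal from two in-plane vectors
    normal = normal / np.linalg.norm(normal)   # unit normal
    return abs(np.dot(a - b1, normal))         # |AB1 projected onto the normal|


if __name__ == "__main__":
    a = np.array([0.0, 0.0, 2.0])
    b1 = np.array([0.0, 0.0, 0.0])
    b2 = np.array([1.0, 0.0, 0.0])
    b3 = np.array([0.0, 1.0, 0.0])
    print(point_to_plane_distance(a, b1, b2, b3))  # 2.0 (plane is z = 0)
```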
After the transformation result is obtained, subsequent frames are compared with the map, and the vehicle transformation is accumulated according to

T_t^M = T_{t-1}^M · T_{t-1,t}^L

where T denotes a vehicle transformation, the superscript denotes the coordinate system of the transformation (M the prior-map coordinate system, L the lidar coordinate system), and t denotes time;
and obtaining the global coordinate in the prior map coordinate through transformation in the formula. The initial pose helps to merge the vehicle coordinate system into the coordinate system of the prior map, so that the vehicle can improve the accuracy by using the prior map data and acquire the relative positions of other objects on the map.
Step 4: performing static object filtering: applying an appropriate dynamic threshold derived from statistics over the point cloud database to obtain relatively static objects. At the current time t, static objects are extracted from the point cloud frame acquired by the lidar to reduce interference from dynamic objects in the garage; specifically, dynamic objects and ground points are removed effectively and at high speed through intensity filtering;
Step 4.1: transforming the point cloud intensity. The intensity value of the point cloud reflects the reflectivity of the measured object and can be used as a criterion for distinguishing objects. The information returned by each point in the point cloud is (x, y, z, I), where (x, y, z) are the physical coordinates of the point relative to the lidar and I is the reflection intensity, which follows

I ∝ ρ · cos λ / r²

where ρ is the material reflectance of the measured object itself, r is the distance of the measured object from the lidar, and λ is the reflection angle at the measured object. Through derivation of the formula, the point cloud intensity value is converted into a quantity directly proportional to the material of the measured object itself:

I_cal = I · r² / cos λ ∝ ρ
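The calibration in step 4.1 can be sketched as follows; `calibrate_intensity` is an illustrative name, and the code assumes the proportionality stated above:

```python
import numpy as np

def calibrate_intensity(I_raw, r, cos_incidence):
    """Remove range and incidence-angle dependence so the result tracks material reflectance."""
    return I_raw * r**2 / cos_incidence


if __name__ == "__main__":
    # the same material observed at two ranges should calibrate to the same value
    rho = 0.8
    for r in (2.0, 5.0):
        raw = rho / r**2                          # simulated return at normal incidence
        print(calibrate_intensity(raw, r, 1.0))   # 0.8 both times
```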
step 4.2: and validity verification, namely extracting the point cloud frame to verify whether the filtering method effectively partitions static objects and non-static objects.
Fig. 4 shows the intensity distribution of the points in a point cloud; the reliability of the intensity filtering method is analyzed by taking it as an example. From data collected in multiple garages, the distribution of all point intensities over a whole point cloud frame is analyzed. The ground in the point cloud is then segmented by a conventional, accurate method; the ground points concentrate in the low-intensity region. Vehicle points and static objects are also segmented manually: the point cloud intensity of vehicle points is clearly small, a small portion of static object points falls in the low-intensity region while the large majority falls in the high-intensity region, and the yellow and blue fold lines almost coincide in the high-intensity region, indicating no interference from non-static objects there. A suitable threshold can therefore be selected for segmentation. Although some static object points are also eliminated, this has no great influence on the overall result.
Step 4.3: calculating a suitable threshold as the reference for point cloud intensity filtering. The threshold is dynamic; different suitable thresholds can be selected for different environments. Since the materials of objects in the same garage generally change little, the database is analyzed when the prior map is constructed: the slope of the intensity distribution of each point cloud frame is calculated, and the distinct valley of the intensity change in the distribution is selected. Fig. 5 shows an example of the threshold distribution in a garage data packet. The thresholds extracted from the database follow a normal distribution, so the μ value of that distribution is selected as the filtering threshold. Fig. 6 shows an example of the static object filtering result with this threshold: the irregular points are non-static, the regular points are static objects;
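The dynamic-threshold filtering of steps 4.1-4.3 can be sketched as below; the per-frame intensity-valley values are assumed to be given, and the function names are the editor's illustrations:

```python
import numpy as np

def dynamic_threshold(frame_valleys):
    """Per-frame intensity valleys are assumed roughly normal; their mean (mu) is the filter threshold."""
    return float(np.mean(frame_valleys))

def filter_static_points(points, intensities, threshold):
    """Keep high-intensity points (static objects); drop low-intensity ones (ground, vehicles)."""
    mask = np.asarray(intensities) >= threshold
    return np.asarray(points)[mask]


if __name__ == "__main__":
    thr = dynamic_threshold([9.0, 11.0, 10.0])            # mu = 10.0
    pts = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
    kept = filter_static_points(pts, [3.0, 15.0, 12.0], thr)
    print(len(kept))  # 2 points survive the filter
```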
Step 5: performing skeleton feature extraction on the filtered static object points. Because the point cloud contains many points and high-frequency comparison results are needed, feature extraction is required; points with distinct, easily matched features are selected as feature points for comparison. Because a gallery problem exists in garages, whether the current scene is a gallery is judged; if so, features are extracted by view angle so that the feature points are distributed in regions of higher reference value. Fig. 7 shows doors in a garage with insignificant geometric differences but significant intensity differences.
Step 5.1: calculating the feature descriptor. During extraction, not only the commonly used geometric differences but also intensity differences are considered, so that the contours of objects with indistinct geometric differences in the point cloud can be obtained and valuable reference feature points are extracted in a low-texture garage.
When computing the feature of point p_i, for each neighboring point p_j ∈ N_p, the descriptor c of point p_i is calculated as

c = Σ_{p_j ∈ N_p} ( |r_j − r_i| + α · |I_j − I_i| )

where N_p is the set of 10 points around the current point p_i, r denotes the distance between a point and the lidar origin, I denotes the intensity, and α is a weight; r_j − r_i is the distance difference between the two adjacent points p_i and p_j, I_j − I_i is the reflection intensity difference between them, r_i is the distance of point p_i, and I_i is the reflection intensity of point p_i;
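A sketch of the descriptor computation along one scan line, under the reconstruction above (the exact weighting in the patent may differ; names are the editor's):

```python
def skeleton_descriptor(r, I, i, alpha=0.5, k=5):
    """Descriptor of point i along a scan line: sum of range and weighted intensity
    differences over up to 2k surrounding points. Flat, uniform surfaces score ~0;
    geometric edges or material boundaries score high."""
    n = len(r)
    neighbors = [j for j in range(i - k, i + k + 1) if j != i and 0 <= j < n]
    return sum(abs(r[j] - r[i]) + alpha * abs(I[j] - I[i]) for j in neighbors)


if __name__ == "__main__":
    # flat wall: constant range and intensity -> descriptor 0
    r = [4.0] * 11
    print(skeleton_descriptor(r, [10.0] * 11, 5))          # 0.0
    # material boundary: same geometry, intensity jump -> descriptor > 0
    print(skeleton_descriptor(r, [10.0] * 6 + [30.0] * 5, 5) > 0)  # True
```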
step 5.2: and judging the situation of the gallery. Specific analysis is carried out on the long corridor condition of the point cloud, the long corridor condition represents that a vehicle walks in a long corridor, the distance between the front scene and the rear scene is far, most of the scenes are wall surfaces without great reference meaning, the probability of unsuccessful matching is high, and more concentrated feature extraction can be carried out by focusing attention on the positions of two sides of the point cloud. Selecting a view angle according to a horizontal angle to extract features, as shown in fig. 8, a schematic diagram of a relationship between distance distribution and angle under a gallery condition; including the maximum distance from the point cloud midpoint and the minimum distance from the point cloud midpoint. Analysis can obtain: the maximum distance value is obtained by dividing different angle areas, judging the situation of the gallery and extracting features through angles.
Fig. 9 is an example of the point cloud sliced according to angle. The point clouds in the grey parts are sparse and of little reference value, so they are discarded; the point clouds on the left and right sides are dense and close, so their feature description is clearer. Attention can thus be focused on valuable regions, enhancing the effectiveness of the feature points.
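The angle-based slicing of step 5.2 can be sketched as follows; the sector boundaries here are illustrative choices, not values from the patent:

```python
import numpy as np

def slice_by_angle(points, keep_sectors_deg):
    """Keep only points whose horizontal angle falls inside the given sectors.
    In the gallery case the side sectors (e.g. around +-90 deg) are kept and
    the sparse front/back sectors are discarded."""
    pts = np.asarray(points, dtype=float)
    ang = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))   # horizontal angle in (-180, 180]
    mask = np.zeros(len(pts), dtype=bool)
    for lo, hi in keep_sectors_deg:
        mask |= (ang >= lo) & (ang <= hi)
    return pts[mask]


if __name__ == "__main__":
    cloud = [[10.0, 0.0, 0.0],   # straight ahead (0 deg)  -> discarded
             [0.0, 3.0, 0.0],    # left side (90 deg)      -> kept
             [0.0, -3.0, 0.0]]   # right side (-90 deg)    -> kept
    side = slice_by_angle(cloud, [(60, 120), (-120, -60)])
    print(len(side))  # 2
```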
Step 6: performing skeleton-feature intensity-based odometer matching on the filtered static object points, i.e., matching two consecutive point cloud frames with the lidar odometer to obtain the local pose transformation. The two frames are aligned by predicting the vehicle transformation, the closest points between them are taken as corresponding points, and the transformation is adjusted by iterative gradient descent until the error is minimized. Because the space of a low-texture garage is relatively narrow, the distance differences between certain points in the cloud are small, so estimation errors are not obvious; intensity information is therefore added to the error calculation to distinguish the material types of points and improve the accuracy of point cloud matching. The point-to-line distance is the distance from a point of the current frame to the nearest line of the previous frame; the line is formed by the nearest point on the same scan line and the nearest point on the adjacent scan line, so the point corresponding to the current point lies between these two points. When calculating the error, the absolute intensity difference to the closer of the two points is added, giving the error function

e = ω_r · |(C^L_(t,i) − C^L_(t−1,j)) × (C^L_(t,i) − C^L_(t−1,l))| / |C^L_(t−1,j) − C^L_(t−1,l)| + ω_I · |I_(t,i) − I_(t−1,j)|

where e is the matching error term, ω_r and ω_I are the weights of the distance error and the intensity error, C denotes coordinates and the subtraction of two coordinates denotes a vector, the superscript L denotes a point in the lidar coordinate system, the subscript (t, i) denotes point i at time t, j and l denote the two previous-frame points forming the matched line (j being the closer one), and I denotes the point cloud intensity;
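The combined distance-plus-intensity error can be sketched as below; the point-to-line geometric term follows the description above, while the default weights are illustrative:

```python
import numpy as np

def match_error(p, I_p, q1, q2, I_q, w_r=1.0, w_I=0.1):
    """Matching error of a current-frame point p against the previous-frame line (q1, q2):
    point-to-line distance plus a weighted intensity difference to the closer point."""
    d = np.linalg.norm(np.cross(p - q1, p - q2)) / np.linalg.norm(q2 - q1)
    return w_r * d + w_I * abs(I_p - I_q)


if __name__ == "__main__":
    q1 = np.array([0.0, 0.0, 0.0])
    q2 = np.array([2.0, 0.0, 0.0])
    p_on = np.array([1.0, 0.0, 0.0])    # on the line, same material
    p_off = np.array([1.0, 1.0, 0.0])   # 1 m off the line
    print(match_error(p_on, 10.0, q1, q2, 10.0))   # 0.0
    print(match_error(p_off, 10.0, q1, q2, 10.0))  # 1.0
```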
During comparison, the error term is the minimum combined geometric distance and intensity difference of the matching points of the two frames; continuous iteration yields the vehicle pose transformation in the lidar coordinate system, giving a high-frequency (10 Hz) local coordinate transformation of the vehicle.
Step 7: fusing the vehicle pose obtained by matching the radar point cloud frame with the prior map from the initial pose and the vehicle pose obtained by intensity-based odometer matching of the skeleton features of the filtered static object points, to obtain the positioning result of the vehicle.
The invention has the advantages that:
the invention provides a positioning method in a vehicle garage based on a prior map, which converts local coordinates in the traditional method into global coordinates and applies a map stream formed by a plane map, an edge map and a path; the invention provides a filtering algorithm based on intensity to obtain a static object, designs a dynamic adaptable segmentation method, integrates intensity difference to extract framework points of the static object, and increases extractable characteristic points; extracting reasonable and credible static target characteristic points based on the view division for the gallery problem; and carrying out high-precision point cloud matching by combining high-frequency odometer matching and a prior map of the strength to obtain accurate pose transformation.
The foregoing is only a preferred embodiment of the present application. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present application, and these modifications and variations should also be regarded as falling within the protection scope of the present application.
Claims (6)
1. An intelligent networking automobile laser radar positioning system for a low-texture garage is characterized by comprising an initial pose determining module (100), a static object filtering module (200), a feature extracting module (300), a point cloud intensity-based odometer matching module (400), a radar point cloud frame and prior map matching module (500) and a fusion positioning module (600); wherein:
the initial pose determining module (100) is active when the vehicle enters the garage for the first time and is used for capturing the vehicle pose at first entry;
the static object filtering module (200) is used for filtering out dynamic objects and ground points in a point cloud frame acquired from a laser radar to obtain static objects;
the feature extraction module (300) is used for selecting static object skeleton points by combining the intensity difference and the geometric space difference during feature point extraction;
the odometer matching module (400) based on the point cloud intensity is used for matching between adjacent frames acquired by the laser radar odometer;
the radar point cloud frame and prior map matching module (500) is used for matching the radar point cloud frame against the prior map at the accumulated pose recorded by the odometer;
the fusion positioning module (600) is used for fusing the vehicle pose obtained by matching the radar point cloud frame against the prior map (seeded with the initial pose) with the vehicle pose obtained by the intensity-based odometer matching of the skeleton features of the filtered static object points, to obtain the positioning result of the vehicle.
2. An intelligent networking automobile laser radar positioning method for a low-texture garage is characterized by comprising the following specific steps:
Step 1: judging whether the system is called for the first time; if so, going to step 2 to determine the initial pose; if not, going to step 4 to filter static objects;
step 2: determining an initial pose of the vehicle;
Step 3: sending the initial pose to the radar point cloud frame for matching with the prior map, wherein the prior map adopts a map stream that is updated and loaded in real time;
Step 4: performing static object filtering: at the current time t, a static object is extracted from the point cloud frame acquired by the laser radar by applying an appropriate dynamic threshold derived from statistics of the point cloud data; specifically, dynamic objects and ground points are removed effectively and at high speed through intensity filtering;
Step 5: performing skeleton feature extraction on the filtered static object points; judging whether the current scene is a gallery, and if so, performing view-angle-based feature extraction;
Step 6: performing skeleton-feature intensity-based odometer matching on the filtered static object points, i.e., matching two consecutive point cloud frames with the laser radar odometer to obtain the local pose transformation; the two frames of point clouds are aligned by predicting the estimated vehicle transformation, the closest points between the two frames are then taken as corresponding points, and the transformation is iteratively adjusted by gradient descent until the error is minimized;
Step 7: matching the radar point cloud frame with the prior map seeded with the initial pose, and performing skeleton-feature-based odometer matching on the filtered static object points.
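Read as a pipeline, the branching and data flow of the steps above can be sketched as follows; every helper function is a placeholder stub of my own naming, not the patent's implementation:

```python
# Placeholder stubs standing in for steps 2-7; names and return values are assumptions.
def determine_initial_pose(frame): return (0.0, 0.0, 0.0)                # step 2
def match_against_prior_map(frame, pose): return pose                    # step 3
def filter_static_objects(frame): return frame                           # step 4
def extract_skeleton_features(points): return points                     # step 5
def odometry_match(features, state): return (0.1, 0.0, 0.0)              # step 6
def fuse(pose, delta): return tuple(p + d for p, d in zip(pose, delta))  # step 7

def localize_frame(frame, state):
    """Control flow of the claimed method: initialize on the first call,
    then filter, extract features, match, and fuse on every frame."""
    if state.get("pose") is None:                            # step 1: first call?
        pose = determine_initial_pose(frame)                 # step 2
        state["pose"] = match_against_prior_map(frame, pose) # step 3
    static_points = filter_static_objects(frame)             # step 4
    features = extract_skeleton_features(static_points)      # step 5
    delta = odometry_match(features, state)                  # step 6
    state["pose"] = fuse(state["pose"], delta)               # step 7
    return state["pose"]
```

With real implementations substituted for the stubs, repeated calls would track the vehicle pose; here each call simply advances x by the stubbed odometry increment.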
3. The method for positioning the intelligent internet-connected automobile laser radar for the low-texture garage as claimed in claim 2, wherein the step 3 further comprises the following specific operations:
Step 3.1: selecting a garage entrance map, specifically: selecting the entrance through which the vehicle enters and travels, comparing the initial frame captured on entering the garage from outside with the local maps at the fixed garage entrances, matching against the local maps of the different entrances, and taking the map with the highest matching degree as the garage entrance local map currently being compared;
if the current map is matched incorrectly, the matched map is switched to a nearby local map;
Step 3.2: registering the point cloud frame obtained by laser radar scanning with the garage entrance prior map; first, the point cloud at the current time t is down-sampled to obtain a point cloud P_0 = {p_i}, i = 1, ..., n, which is compared with the map frame A, wherein A comprises a map formed by the skeleton points and a map formed by the planar points;
After obtaining the transformation result, the subsequent frames are compared with the map, and the accumulated vehicle transformation T^M_{L_t} is calculated by chaining the frame-to-frame transformations:
T^M_{L_t} = T^M_{L_{t-1}} · T^{L_{t-1}}_{L_t}
wherein M represents the coordinate system of the prior map, L represents the coordinate system of the laser radar, and t represents time;
the vehicle coordinate system is brought into the coordinate system of the prior map through this vehicle transformation, yielding global coordinates in the prior map frame and the relative positions of other objects on the map.
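The accumulation in claim 3 amounts to chaining homogeneous transforms; a minimal numpy sketch, assuming a 4x4 homogeneous-matrix convention (function names are mine):

```python
import numpy as np

def accumulate_pose(T_map_prev, T_delta):
    """T^M_{L_t} = T^M_{L_{t-1}} @ T^{L_{t-1}}_{L_t}: chain the previous
    map-frame pose with the newest lidar frame-to-frame increment."""
    return T_map_prev @ T_delta

def to_global(T_map_lidar, p_lidar):
    """Map a lidar-frame point into prior-map (global) coordinates."""
    p_h = np.append(p_lidar, 1.0)      # homogeneous coordinates
    return (T_map_lidar @ p_h)[:3]
```

Each odometer increment multiplies onto the right, so the running product always maps the current lidar frame directly into the prior map.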
4. The method for positioning the intelligent internet-connected automobile laser radar for the low-texture garage as claimed in claim 2, wherein the step 4 further comprises the following specific operations:
Step 4.1: converting the intensity of the point cloud; the information returned for each point in the point cloud is (x, y, z, I), where (x, y, z) represents the physical coordinate relative to the laser radar, and the reflection intensity I of each point follows I = ρ · cos λ / r², where ρ is the reflectivity of the material of the measured object itself, r is the distance of the measured object from the laser radar, and λ is the reflection angle at the measured object; derivation from this formula shows that, once r and λ are compensated for, the point cloud intensity value becomes directly proportional to the material of the measured object itself;
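Assuming the standard model I = ρ · cos λ / r² stated above, the range and angle compensation of step 4.1 can be sketched as follows (the function name is mine, not the patent's):

```python
import numpy as np

def normalized_intensity(I_raw, r, lam):
    """Divide out the range (r**2) and incidence-angle (cos lam) terms of the
    assumed model I = rho * cos(lam) / r**2, leaving a value proportional
    to the material reflectivity rho alone."""
    return I_raw * r**2 / np.cos(lam)
```

After this compensation, points on the same material return similar values regardless of how far away or how obliquely they were hit, which is what makes a single intensity threshold meaningful.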
Step 4.2: validity verification: extracting a point cloud frame to verify that static and non-static objects are segmented effectively;
Step 4.3: calculating an appropriate threshold value as the reference for point cloud intensity filtering; the threshold is dynamic, and different suitable thresholds are selected for different environments; the slope of the intensity distribution of each point cloud frame is calculated, a pronounced concave dip in the intensity distribution is selected as the filtering threshold, and static objects are filtered out using this threshold.
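One possible reading of the dip-based threshold selection in step 4.3, as a sketch under assumed bin count and valley criterion (both are my choices, not the patent's):

```python
import numpy as np

def dynamic_intensity_threshold(intensities, bins=32):
    """Histogram the per-frame intensities and pick the deepest valley
    between the two dominant modes as the filtering threshold."""
    hist, edges = np.histogram(intensities, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    top = np.argsort(hist)[-2:]            # the two most populated bins
    lo, hi = sorted(int(k) for k in top)
    if hi - lo < 2:                        # no gap between modes: no clear dip
        return float(np.median(intensities))
    k = lo + 1 + int(np.argmin(hist[lo + 1:hi]))  # deepest bin between modes
    return float(centers[k])
```

Because the threshold is recomputed per frame, it adapts to garages whose surfaces shift the whole intensity distribution up or down.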
5. The method for positioning the intelligent internet-connected automobile laser radar for the low-texture garage as claimed in claim 2, wherein the step 5 further comprises the following specific operations:
Step 5.1: computing the feature descriptor c_i according to the formula:
c_i = (1 / (|N_p| · r_i)) · | Σ_{j∈N_p} [ α · (r_j − r_i) + (1 − α) · (I_j − I_i) ] |
wherein N_p represents the set of about 10 points around the current point p_i, r_j − r_i represents the distance difference between two adjacent points p_i and p_j, I_j − I_i represents the reflection intensity difference between two adjacent points p_i and p_j, α is a weight, r_i represents the distance between point p_i and the laser radar origin, i and j respectively index the nearest point on the same scan line and the nearest point on the adjacent scan line of the radar, and I_i represents the reflection intensity of point p_i;
Step 5.2: judging whether a gallery situation is present in the point cloud captured by the radar, and, for the gallery situation, selecting the view angle according to the horizontal angle for feature extraction.
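A sketch of a descriptor in the spirit of step 5.1, combining accumulated range differences with accumulated intensity differences over the points around p_i; the exact weighting and neighborhood are assumptions (the structure follows LOAM-style curvature with an added intensity term):

```python
def skeleton_descriptor(r, I, i, alpha=0.5, k=5):
    """Descriptor for point index i on one scan line, from lists of ranges r
    and intensities I: k neighbours on each side (about 10 points total).
    Flat walls score near zero; corners and intensity edges score high."""
    nb = list(range(i - k, i)) + list(range(i + 1, i + k + 1))
    dr = sum(r[j] - r[i] for j in nb)   # accumulated range difference
    dI = sum(I[j] - I[i] for j in nb)   # accumulated intensity difference
    n = len(nb)
    return alpha * abs(dr) / (n * r[i]) + (1 - alpha) * abs(dI) / n
```

On a featureless wall both sums cancel, which is exactly why the intensity term matters in a low-texture garage: a painted marking changes I even where the geometry is flat.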
6. The method for positioning the intelligent internet-connected automobile laser radar for the low-texture garage as claimed in claim 2, wherein the error calculation formula in the step 6 is:
e = ω_r · ||C^L_(t,i) − C^L_(t−1,j)|| + ω_I · |I_(t,i) − I_(t−1,j)|
where e represents the matching error term, ω_r and ω_I represent the weights of the distance error and the intensity error respectively, C represents coordinates (the subtraction of two coordinates yields a vector), the superscript L denotes a point in the radar coordinate system, the subscript (t, i) of a point denotes point i at time t, I represents the point cloud intensity, and r represents the distance from the current point to the coordinate origin;
during comparison, the error term combines the geometric distance and the intensity difference of the matched points of two consecutive frames; continuous iteration until this error is minimized yields the vehicle pose transformation in the laser radar coordinate system, giving the high-frequency 10 Hz local coordinate transformation of the vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111127722.XA CN113885046B (en) | 2021-09-26 | 2021-09-26 | Intelligent network-connected automobile laser radar positioning system and method for low-texture garage |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113885046A true CN113885046A (en) | 2022-01-04 |
CN113885046B CN113885046B (en) | 2024-06-18 |
Family
ID=79006749
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111127722.XA Active CN113885046B (en) | 2021-09-26 | 2021-09-26 | Intelligent network-connected automobile laser radar positioning system and method for low-texture garage |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113885046B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105628026A (en) * | 2016-03-04 | 2016-06-01 | 深圳大学 | Positioning and posture determining method and system of mobile object |
CN109084732A (en) * | 2018-06-29 | 2018-12-25 | 北京旷视科技有限公司 | Positioning and air navigation aid, device and processing equipment |
US20200284590A1 (en) * | 2019-03-05 | 2020-09-10 | DeepMap Inc. | Distributed processing of pose graphs for generating high definition maps for navigating autonomous vehicles |
Non-Patent Citations (2)
Title |
---|
SCHAUER, J. et al.: "Removing Non-Static Objects from 3D Laser Scan Data", ISPRS Journal of Photogrammetry and Remote Sensing, 31 December 2018 (2018-12-31) * |
REN, Shuai; ZHANG, Wenjun: "Error Analysis and Quality Control of Vehicle-Mounted LIDAR Technology", Journal of Southwest University of Science and Technology, No. 04, 15 December 2016 (2016-12-15) * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114660589A (en) * | 2022-03-25 | 2022-06-24 | 中国铁建重工集团股份有限公司 | Method, system and device for positioning underground tunnel |
CN114660589B (en) * | 2022-03-25 | 2023-03-10 | 中国铁建重工集团股份有限公司 | Method, system and device for positioning underground tunnel |
CN114674308A (en) * | 2022-05-26 | 2022-06-28 | 之江实验室 | Vision-assisted laser gallery positioning method and device based on safety exit indicator |
CN116299500A (en) * | 2022-12-14 | 2023-06-23 | 江苏集萃清联智控科技有限公司 | Laser SLAM positioning method and device integrating target detection and tracking |
CN116299500B (en) * | 2022-12-14 | 2024-03-15 | 江苏集萃清联智控科技有限公司 | Laser SLAM positioning method and device integrating target detection and tracking |
CN116124161A (en) * | 2022-12-22 | 2023-05-16 | 东南大学 | LiDAR/IMU fusion positioning method based on priori map |
CN117310772A (en) * | 2023-11-28 | 2023-12-29 | 电子科技大学 | Electromagnetic target positioning method based on map information visual distance or non-visual distance detection |
CN117310772B (en) * | 2023-11-28 | 2024-02-02 | 电子科技大学 | Electromagnetic target positioning method based on map information visual distance or non-visual distance detection |
CN117367412A (en) * | 2023-12-07 | 2024-01-09 | 南开大学 | Tightly-coupled laser inertial navigation odometer integrating bundle set adjustment and map building method |
CN117367412B (en) * | 2023-12-07 | 2024-03-29 | 南开大学 | Tightly-coupled laser inertial navigation odometer integrating bundle set adjustment and map building method |
Also Published As
Publication number | Publication date |
---|---|
CN113885046B (en) | 2024-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113885046A (en) | Intelligent internet automobile laser radar positioning system and method for low-texture garage | |
US11579623B2 (en) | Mobile robot system and method for generating map data using straight lines extracted from visual images | |
CN110859044B (en) | Integrated sensor calibration in natural scenes | |
US9846812B2 (en) | Image recognition system for a vehicle and corresponding method | |
US7027615B2 (en) | Vision-based highway overhead structure detection system | |
US20040125207A1 (en) | Robust stereo-driven video-based surveillance | |
KR101569919B1 (en) | Apparatus and method for estimating the location of the vehicle | |
CN105667518A (en) | Lane detection method and device | |
Nguyen et al. | Compensating background for noise due to camera vibration in uncalibrated-camera-based vehicle speed measurement system | |
CN112740225B (en) | Method and device for determining road surface elements | |
US11151729B2 (en) | Mobile entity position estimation device and position estimation method | |
CN114459467B (en) | VI-SLAM-based target positioning method in unknown rescue environment | |
EP3593322B1 (en) | Method of detecting moving objects from a temporal sequence of images | |
CN115273034A (en) | Traffic target detection and tracking method based on vehicle-mounted multi-sensor fusion | |
CN115144828A (en) | Automatic online calibration method for intelligent automobile multi-sensor space-time fusion | |
CN113971697B (en) | Air-ground cooperative vehicle positioning and orientation method | |
CN113838129B (en) | Method, device and system for obtaining pose information | |
Oniga et al. | A fast ransac based approach for computing the orientation of obstacles in traffic scenes | |
CN116358547A (en) | Method for acquiring AGV position based on optical flow estimation | |
CN103473787A (en) | On-bridge-moving-object detection method based on space geometry relation | |
CN110488320A (en) | A method of vehicle distances are detected using stereoscopic vision | |
CN118864594A (en) | Sequential fusion positioning and mapping method based on multistage dynamic point cloud processing | |
Wang et al. | Research on visual odometry based on large-scale aerial images taken by UAV | |
CN115980754A (en) | Vehicle detection and tracking method fusing sensor information | |
Wang et al. | Kalman Filter–Based Tracking System for Automated Inventory of Roadway Signs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||