CN114049542A - Fusion positioning method based on multiple sensors in dynamic scene - Google Patents
Fusion positioning method based on multiple sensors in dynamic scene
- Publication number
- CN114049542A (application number CN202111253666.4A / CN202111253666A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- dynamic
- point
- static environment
- dynamic object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
Abstract
The invention relates to a fusion positioning method based on multiple sensors in a dynamic scene, comprising the following steps: S1, acquiring a dynamic event point cloud collected by a dynamic vision sensor camera and environment point cloud data collected by a laser radar; S2, processing the dynamic event point cloud and filtering noise events to obtain a dynamic object image; S3, identifying the dynamic object in the dynamic object image and selecting the dynamic object area; S4, mapping the dynamic object area to the environment point cloud data and removing the dynamic object point cloud to obtain a static environment point cloud; and S5, registering the static environment point cloud against the map features to acquire positioning information. Compared with the prior art, the method greatly improves positioning accuracy and robustness.
Description
Technical Field
The invention relates to the technical field of automatic driving, in particular to a fusion positioning method based on multiple sensors in a dynamic scene.
Background
An automatic driving vehicle runs unsupervised: the driving brain replaces the driver by simulating human observation (the sensor system), thinking (the driving brain), and operation (planning control), and then controls the wheels to complete driving tasks. The main functions involved in automatic driving include ego-vehicle positioning, moving target detection, route detection, path planning, vehicle tracking control, and the like.
In an autonomous vehicle navigation system, accurate positioning is an essential prerequisite for path planning, trajectory tracking control, and navigation to a target location. Currently, lidar and cameras are widely used for environmental perception: they extract and identify environmental targets and apply landmark matching to achieve autonomous vehicle positioning. However, a real environment inevitably contains many moving objects, such as pedestrians, automobiles, and bicycles, and false matching between the features of these moving objects and the map features can severely degrade positioning accuracy. Improving the fusion positioning accuracy of lidar and camera in dynamic scenes has therefore become an important research branch.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a fusion positioning method based on multiple sensors in a dynamic scene.
The purpose of the invention can be realized by the following technical scheme:
a fusion positioning method based on multiple sensors in a dynamic scene comprises the following steps:
s1, acquiring dynamic event point cloud acquired by a dynamic vision sensor camera and environment point cloud data acquired by a laser radar;
s2, processing the dynamic event point cloud and filtering noise events to obtain a dynamic object image;
s3, identifying the dynamic object in the dynamic object image, and selecting a dynamic object area;
s4, mapping the dynamic object area to the environment point cloud data, and removing the dynamic object point cloud to obtain a static environment point cloud;
and S5, registering the static environment point cloud and the map features to acquire positioning information.
Preferably, step S2 is specifically:
s21, projecting the dynamic event point clouds at different sampling moments in a sampling time period to the same image plane to obtain a dynamic event point cloud image;
s22, filtering based on the event point cloud thickness of each pixel point in the dynamic event point cloud image, and filtering noise point cloud;
and S23, carrying out binarization processing on the dynamic event point cloud image with the noise point cloud filtered out to obtain a dynamic object image.
Preferably, step S22 is specifically:
S221, counting the number of event points of each pixel point in the dynamic event point cloud image, recorded as the point cloud thickness n_i of the i-th pixel point, i = 0, 1, ..., L−1, where L is the total number of pixel points;
S222, calculating the probability P_i that the event point count of the i-th pixel accounts for the total number of event points, and the average event point cloud thickness μ_T:

n = n_0 + n_1 + ... + n_{L−1}, P_i = n_i / n, μ_T = n / L;
S223, sorting the point cloud thicknesses of the L pixel points from largest to smallest so as to find the point cloud thickness threshold n_T that separates the noise point cloud from the dynamic object point cloud, by maximizing the between-class variance

σ_B²(n_T) = w_0(n_T)·[μ_0(n_T) − μ_T]² + w_1(n_T)·[μ_1(n_T) − μ_T]²,

where j denotes the j-th pixel point in the descending thickness order, w_0(n_T) represents the probability of the dynamic object region, w_1(n_T) represents the probability of the noise region, μ_0(n_T) represents the average point cloud thickness of the dynamic object region, μ_1(n_T) represents the average point cloud thickness of the noise region, and σ_B²(n_T) is the between-class point cloud thickness variance of the dynamic object region and the noise region;
S224, treating the points in the dynamic event point cloud image whose point cloud thickness is smaller than n_T as noise points, and filtering them out.
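The thickness-based Otsu thresholding of S221–S224 can be sketched as follows; this is a minimal illustration that treats each pixel's event count as its "thickness" and scans candidate thresholds directly (which yields the same maximizer as sorting by thickness), not the patent's exact implementation:

```python
import numpy as np

def otsu_thickness_threshold(thickness):
    """Find the point-cloud-thickness threshold n_T separating noise pixels
    from dynamic-object pixels by maximizing the between-class variance
    (equivalent to w0*(mu0 - muT)^2 + w1*(mu1 - muT)^2)."""
    t = thickness.ravel()
    best_T, best_var = 0, -1.0
    for n_T in range(1, int(t.max()) + 1):
        noise = t[t < n_T]           # candidate noise pixels (thin)
        obj = t[t >= n_T]            # candidate dynamic-object pixels (thick)
        if noise.size == 0 or obj.size == 0:
            continue
        w1 = noise.size / t.size     # probability of the noise region
        w0 = obj.size / t.size       # probability of the dynamic object region
        mu1, mu0 = noise.mean(), obj.mean()
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_T = var_between, n_T
    return best_T

# S224: pixels thinner than n_T are treated as noise and filtered out
thickness = np.array([[0, 1, 0], [8, 9, 1], [7, 8, 0]])
n_T = otsu_thickness_threshold(thickness)
filtered = np.where(thickness < n_T, 0, thickness)
```

On this toy grid the threshold lands between the thin noise pixels (0–1 events) and the thick object pixels (7–9 events), so only the object region survives the filter.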
Preferably, step S23 performs the binarization processing to obtain the dynamic object image according to the following rule:

f(x, y) = 255 if n(x, y) ≥ n_T, and f(x, y) = 0 otherwise,

where f(x, y) represents the gray value of the pixel point with coordinates (x, y), and n(x, y) represents the point cloud thickness of the pixel point with coordinates (x, y).
Preferably, after the dynamic object image is acquired in step S23, an expansion convolution process is further performed, and the gray value of each pixel point is replaced by the minimum gray value in the convolution kernel during the expansion convolution process.
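A minimal sketch of the binarization and the kernel-minimum replacement described above, assuming the binarization thresholds the thickness at n_T to 255/0 (the original formula is an image) and assuming a 3×3 kernel with edge padding:

```python
import numpy as np

def binarize(thickness, n_T):
    # f(x, y) = 255 if the point cloud thickness n(x, y) >= n_T, else 0
    return np.where(thickness >= n_T, 255, 0).astype(np.uint8)

def kernel_min_filter(img, k=3):
    # Replace each pixel's gray value with the minimum gray value
    # inside a k x k convolution kernel centered on it
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out

b = binarize(np.array([[5, 0], [5, 5]]), n_T=3)
eroded = kernel_min_filter(b)
```

Note that taking the kernel minimum shrinks bright regions (morphological erosion); a production pipeline would typically use a library minimum filter rather than this explicit loop.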
Preferably, step S3 identifies the dynamic object in the dynamic object image using the YOLO-small network.
Preferably, step S5 is specifically:
s51, establishing a static environment point cloud error model, and performing error evaluation on the static environment point cloud by using the static environment point cloud error model;
and S52, determining the error weight of each point based on the static environment point cloud error evaluation result, and registering the static environment point cloud and the map features based on the error weight to obtain positioning information.
Preferably, the static environment point cloud error model includes a uniform distribution error model and a Gaussian distribution error model; the uniform distribution error model is used for calculating the uniform distribution error of each point in the static environment point cloud, and the Gaussian distribution error model is used for calculating the Gaussian distribution error of each point in the static environment point cloud.
Preferably, the error weight of each point in the static environment point cloud in step S52 is:
where w_i is the error weight of the i-th point in the static environment point cloud, w_i^even is the weight corresponding to the uniform distribution error model of the i-th point, w_i^gaussian is the weight corresponding to the Gaussian distribution error model of the i-th point, PP_i^even is the uniform distribution information entropy of the i-th point in the static environment point cloud, P_i^even is the relative probability distribution information entropy of the i-th point of the current static environment point cloud in the uniform distribution error model, PP_i^gaussian is the Gaussian distribution information entropy of the i-th point in the static environment point cloud, and P_i^gaussian is the relative probability distribution information entropy of the i-th point of the current static environment point cloud in the Gaussian distribution error model.
Preferably, Pi evenAnd Pi gaussianRespectively as follows:
Pi even=ln[λ1iλ2iλ3idi 3]
λ1i=θdi,λ2i=φdi,λ2i=hdi
where θ is the horizontal angular resolution of the laser radar, φ is the vertical angular resolution of the laser radar, h is the air medium coefficient, and d_i is the scanning distance of the i-th point in the static environment point cloud.
Compared with the prior art, the invention has the following advantages:
(1) Interference from moving targets in a dynamic scene introduces registration errors into laser point cloud positioning; meanwhile, the sparsity and multi-scale characteristics between the multiple beams of the laser radar cause spatial distribution errors in the extracted point cloud features, which in turn cause point cloud registration and positioning errors. The invention therefore fuses the dynamic event point cloud acquired by the dynamic vision sensor camera with the environment point cloud data acquired by the laser radar, and uses the dynamic event point cloud to remove dynamic objects from the environment point cloud data, thereby improving laser positioning accuracy and robustness;
(2) the method establishes a static environment point cloud error model to carry out error evaluation on the static environment point cloud, optimizes the matching weight of the characteristics and improves the positioning precision.
Drawings
FIG. 1 is a general flow chart diagram of a multi-sensor based fusion positioning method in a dynamic scenario according to the present invention;
FIG. 2 is an architecture diagram of a multi-sensor based fusion positioning method in a dynamic scenario according to the present invention;
FIG. 3 is a schematic diagram of event point recognition by a dynamic vision sensor camera for a bicycle in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of processing a dynamic event point cloud and filtering noise events to obtain a dynamic object image according to an embodiment of the present invention;
FIG. 5 is a network diagram of a ResN structure in a YOLO-small network;
FIG. 6 is a network diagram of a feature layer and a Neck layer in a YOLO-small network;
FIG. 7 is a schematic diagram of a characteristic error distribution of static lidar points;
FIG. 8 is a schematic illustration of the stereo error distribution of the uniform distribution and the Gaussian distribution;
FIG. 9 is a schematic diagram of an error model for uniform and Gaussian distributions;
fig. 10 is a distribution probability curve of the uniform distribution and the gaussian distribution.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. Note that the following description of the embodiments is merely illustrative; the present invention is not limited to the applications or uses described, nor to the following embodiments.
Examples
As shown in fig. 1 and fig. 2, the present embodiment provides a fusion positioning method based on multiple sensors in a dynamic scene, where the method includes:
s1, acquiring dynamic event point cloud acquired by a dynamic vision sensor camera and environment point cloud data acquired by a laser radar;
s2, processing the dynamic event point cloud and filtering noise events to obtain a dynamic object image;
s3, identifying the dynamic object in the dynamic object image, and selecting a dynamic object area;
s4, mapping the dynamic object area to the environment point cloud data, and removing the dynamic object point cloud to obtain a static environment point cloud;
and S5, registering the static environment point cloud and the map features to acquire positioning information.
Step S2 specifically includes:
s21, projecting the dynamic event point clouds at different sampling moments in a sampling time period to the same image plane to obtain a dynamic event point cloud image;
s22, filtering based on the event point cloud thickness of each pixel point in the dynamic event point cloud image, and filtering noise point clouds, wherein the filtering adopts an Otsu method;
and S23, carrying out binarization processing on the dynamic event point cloud image with the noise point cloud filtered out to obtain a dynamic object image.
Step S22 specifically includes:
S221, counting the number of event points of each pixel point in the dynamic event point cloud image, recorded as the point cloud thickness n_i of the i-th pixel point, i = 0, 1, ..., L−1, where L is the total number of pixel points;
S222, calculating the probability P_i that the event point count of the i-th pixel accounts for the total number of event points, and the average event point cloud thickness μ_T:

n = n_0 + n_1 + ... + n_{L−1}, P_i = n_i / n, μ_T = n / L;
S223, sorting the point cloud thicknesses of the L pixel points from largest to smallest so as to find the point cloud thickness threshold n_T that separates the noise point cloud from the dynamic object point cloud, by maximizing the between-class variance

σ_B²(n_T) = w_0(n_T)·[μ_0(n_T) − μ_T]² + w_1(n_T)·[μ_1(n_T) − μ_T]²,

where j denotes the j-th pixel point in the descending thickness order, w_0(n_T) represents the probability of the dynamic object region, w_1(n_T) represents the probability of the noise region, μ_0(n_T) represents the average point cloud thickness of the dynamic object region, μ_1(n_T) represents the average point cloud thickness of the noise region, and σ_B²(n_T) is the between-class point cloud thickness variance of the dynamic object region and the noise region;
S224, treating the points in the dynamic event point cloud image whose point cloud thickness is smaller than n_T as noise points, and filtering them out.
Step S23 performs the binarization processing to obtain the dynamic object image according to the following rule:

f(x, y) = 255 if n(x, y) ≥ n_T, and f(x, y) = 0 otherwise,

where f(x, y) represents the gray value of the pixel point with coordinates (x, y), and n(x, y) represents the point cloud thickness of the pixel point with coordinates (x, y).
Step S23 is to perform dilation convolution processing after acquiring the dynamic object image, and replace the gray value of each pixel point with the minimum gray value in the convolution kernel during dilation convolution processing.
Specifically, in the present embodiment, as shown in fig. 3, the DVS camera captures a bicycle and projects the event point cloud onto the image plane. Due to environmental influences, there is noise on the image plane, and an image with noise points may severely enlarge the apparent area of the moving object, which challenges its identification. The number of event points of a DVS camera is related to the moving speed and the rate of color change over a very short period. The Otsu method described above therefore calculates the judgment threshold and implements the filtering operation; the threshold n_T changes in different dynamic scenarios with the moving speed of the object. The Otsu method of the invention retains the event body characteristics of the moving object while filtering out noise points. Then, a dilation convolution is applied to the DVS image to enhance the main feature region and connect dispersed regions, expanding the body region of the moving object; the filtered and dilated dynamic object image is shown in fig. 4.
Step S3 identifies a dynamic object in the dynamic object image using the YOLO-small network. The detection network comprises three parts: a feature layer, a neck layer, and a YOLO layer. Objects detected by the DVS camera include cars, pedestrians, and bicycles.
Characteristic layer: since the DVS camera has an advantage of detecting only a moving object, there are only two kinds of pixels (background still pixels and object moving pixels). The feature extraction layer can be simplified. The RES structure is designed (see fig. 5) and used as a basic unit for extracting features from the DVS image.
A rock layer: and carrying out a single convolution operation on the deepest characteristic layer. And then combined with other feature layers by upsampling conduction. In this way, the problem of gradient disappearance caused by deepening of the feature layer can be alleviated. The junction network of the feature layer and the tack layer is shown in fig. 6.
YOLO layer: DVS-based image detection has only one label (dynamic object). The YOLO layer has three prior boxes, each of which includes five parts: x offset, y offset, length, width, and confidence. The dimension of the last output layer is 3 × (5 + 1) = 3 × 6 = 18.
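The output dimension stated above follows directly from the box encoding; a quick check, with the three prior boxes and the single class taken from the text:

```python
num_prior_boxes = 3   # prior boxes per grid cell
box_params = 5        # x offset, y offset, length, width, confidence
num_classes = 1       # single "dynamic object" label

# Channels of the last output layer: boxes * (box parameters + classes)
output_channels = num_prior_boxes * (box_params + num_classes)
print(output_channels)  # 3 * (5 + 1) = 18
```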
Loss function: the loss function includes six parts: the x-coordinate of the prediction center, the y-coordinate, the width, the height, the IOU error, and the prediction category.
Step S5 specifically includes:
s51, establishing a static environment point cloud error model, and performing error evaluation on the static environment point cloud by using the static environment point cloud error model;
and S52, determining the error weight of each point based on the static environment point cloud error evaluation result, and registering the static environment point cloud and the map features based on the error weight to obtain positioning information.
Due to the sparsity of the lidar points, the static features extracted after subtracting the dynamic objects identified by the lidar and the camera deviate in distribution from the actual features. The deviations are distributed in the width, height, and depth directions, as shown in fig. 7. The extracted features are assumed to carry a distributed error vector pe = [δx, δy, δz], where δx is related to the horizontal angular resolution θ and the scanning distance d, δy to the vertical angular resolution φ and d, and δz to the air medium h and d: δx = θ·d = λ1, δy = φ·d = λ2, δz = h·d = λ3.
Information entropy quantifies the uncertainty of discrete random events and is one of the most important concepts in modern measurement theory; it can therefore be introduced to evaluate the uncertainty of the feature distribution, and it also characterizes the size (volume) of the distribution error. The information entropy P and the error entropy S are expressed as:

P = −Σ_X p(X)·ln p(X), S = e^P,

where p(X) represents the probability of event X.
The actual distribution rule of the point features influences the information entropy calculation, so the probability expression of the error model must be established accurately. If the error distribution follows a uniform distribution, the error volume is cube-shaped; if it follows a Gaussian distribution, the error volume is ellipsoidal. The stereoscopic error distributions of the uniform and Gaussian cases are shown in fig. 8, where fig. 8(a) shows the uniform case and fig. 8(b) the Gaussian case. Fig. 9 shows the error models: fig. 9(a) the uniform error model and fig. 9(b) the Gaussian error model. Fig. 10 shows the distribution probability curves: fig. 10(a) for the uniform distribution and fig. 10(b) for the Gaussian distribution.
The distribution function of the uniform distribution is:

f(x) = 1/λ for −0.5λ ≤ x ≤ 0.5λ, and f(x) = 0 otherwise.
the distribution function of the gaussian distribution is:
in the uniform distribution model, there is only one parameter, i.e., (x) to U (-0.5 λ,0.5 λ). In fact, all solid points are within the cube: [ -0.5 λ 1,0.5 λ 1], [ -0.5 λ 2,0.5 λ 2], and [ -0.5 λ 3,0.5 λ 3 ]. Since the total probability integral of the error cube is 1, equation (1-1) can be obtained. Then, equation (1-2) can be further simplified. From the definitions of the information entropy and the error entropy, the error entropy of the uniform distribution model can be calculated by equations (1-3). The detection precision can be found to be linearly changed along with the scanning distance through the calculation result of the error entropy.
Seven=eP=λ1λ2λ3d3=1.84*10-7*d3(1-3)
Wherein f (x, y, z) represents the probability of uniform distribution of spatial points, PevenEntropy of information, S, representing the mean probability distributionevenError entropy representing the mean probability distribution.
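Using the relations δx = θd, δy = φd, δz = hd from above, the uniform error entropy can be evaluated numerically; the resolution values below are assumptions for illustration (the 1.84×10⁻⁷ constant in (1-3) corresponds to one specific sensor, not these numbers):

```python
import math

def uniform_error_entropy(theta, phi, h, d):
    """Error entropy S_even = e^P of the uniform model, with
    P = ln(theta * phi * h * d^3) the information entropy."""
    P = math.log(theta * phi * h * d ** 3)
    return math.exp(P)  # equals theta * phi * h * d^3

# Assumed angular resolutions (radians) and air-medium coefficient
theta, phi, h = 0.003, 0.007, 0.005
S10 = uniform_error_entropy(theta, phi, h, 10.0)
S20 = uniform_error_entropy(theta, phi, h, 20.0)
```

Doubling the scanning distance multiplies the error entropy by 2³ = 8, which is the cubic growth of the error volume; the per-axis error δ = θ·d still grows only linearly.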
In the Gaussian distribution model there is only one parameter, i.e., f(x) ~ N(0, λ²). In fact, 97% of the lidar points fall within the ellipsoid [−λ1+μ, λ1+μ] × [−λ2+μ, λ2+μ] × [−λ3+μ, λ3+μ].
where f(x, y, z) represents the probability of the Gaussian distribution of spatial points, and P_gaussian represents the information entropy of the Gaussian probability distribution.
Equations (1-5) can be converted to equations (1-6) by the spherical integration and distributed integration method. The error entropy of the gaussian distribution can also be calculated using equations (1-7) according to the definitions of the information entropy and the error entropy.
where P_gaussian represents the information entropy of the Gaussian probability distribution, and S_gaussian represents the error entropy of the Gaussian probability distribution.
Based on the above, the static environment point cloud error model includes a uniform distribution error model and a gaussian distribution error model, the uniform distribution error model is used for calculating a uniform distribution error of each point in the static environment point cloud, and the gaussian distribution error model is used for the gaussian distribution error of each point in the static environment point cloud.
In step S52, the error weight of each point in the static environment point cloud is:
where w_i is the error weight of the i-th point in the static environment point cloud, w_i^even is the weight corresponding to the uniform distribution error model of the i-th point, w_i^gaussian is the weight corresponding to the Gaussian distribution error model of the i-th point, PP_i^even is the uniform distribution information entropy of the i-th point in the static environment point cloud, P_i^even is the relative probability distribution information entropy of the i-th point of the current static environment point cloud in the uniform distribution error model, PP_i^gaussian is the Gaussian distribution information entropy of the i-th point in the static environment point cloud, and P_i^gaussian is the relative probability distribution information entropy of the i-th point of the current static environment point cloud in the Gaussian distribution error model.
P_i^even and P_i^gaussian are respectively:

P_i^even = ln[λ_{1i}·λ_{2i}·λ_{3i}] = ln[θ·φ·h·d_i³]

λ_{1i} = θ·d_i, λ_{2i} = φ·d_i, λ_{3i} = h·d_i
where θ is the horizontal angular resolution of the laser radar, φ is the vertical angular resolution of the laser radar, h is the air medium coefficient, and d_i is the scanning distance of the i-th point in the static environment point cloud.
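The per-point weight computation can be sketched as follows. The patent's exact combination formula is an image in the original and is not recoverable, so the combination rule below (inverse of the combined error volume, using the differential-entropy volume of an axis-aligned Gaussian) is an illustrative assumption, with λ_{1i} = θ·d_i, λ_{2i} = φ·d_i, λ_{3i} = h·d_i as above:

```python
import math

def per_point_weight(theta, phi, h, d_i):
    """Combine the uniform and Gaussian error entropies of one lidar point
    into a matching weight; nearer points (smaller error volume) weigh more.
    The combination rule is an assumption, not the patent's exact formula."""
    l1, l2, l3 = theta * d_i, phi * d_i, h * d_i
    S_even = l1 * l2 * l3                          # e^{P_even}: uniform error volume
    # e^{P_gaussian} for an axis-aligned N(0, diag(l^2)) error ellipsoid,
    # using the differential entropy 0.5*ln(2*pi*e*sigma^2) per axis
    S_gauss = (2 * math.pi * math.e) ** 1.5 * l1 * l2 * l3
    # Illustrative weight: inverse of the combined error volume
    return 1.0 / (S_even + S_gauss)

theta, phi, h = 0.003, 0.007, 0.005   # assumed resolutions / medium coefficient
w_near = per_point_weight(theta, phi, h, 10.0)
w_far = per_point_weight(theta, phi, h, 50.0)
```

Whatever the exact rule, the qualitative behavior matches the model: the error volume grows as d³, so distant points receive smaller weights in the registration step.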
When the vehicle is driving autonomously, there is a map feature set G and a current driving feature set P, which is transformed into a feature set PT by the rotation-translation matrices (R, H). The least-squares distance error E_matching from PT to G is calculated according to the following equation:
in the formula, R represents a rotational posture change matrix, and H represents a translational change matrix.
If the E_matching value is less than a predefined threshold, the current features are successfully matched with the map features.
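The matching step can be sketched as follows, assuming nearest-neighbor correspondences and optional per-point error weights; the patent's exact E_matching expression is an image in the original, so a standard (weighted) sum of squared distances is assumed:

```python
import numpy as np

def matching_error(P, G, R, H, weights=None):
    """Least-squares distance error from the transformed feature set
    PT = R @ p + H to the nearest map features in G."""
    PT = P @ R.T + H                                  # rotate then translate
    # Squared distance from every transformed point to every map feature
    d2 = ((PT[:, None, :] - G[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)                          # nearest map feature per point
    if weights is None:
        weights = np.ones(len(P))
    return float((weights * nearest).sum())

# Identity transform on identical feature sets gives zero error
P = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
G = P.copy()
E = matching_error(P, G, np.eye(3), np.zeros(3))
matched = E < 1e-6   # current features match the map features
```

In practice (R, H) would be iteratively refined (e.g. ICP-style) to minimize E_matching before the threshold test is applied.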
The invention fuses a laser radar and a dynamic vision sensor camera (DVS camera) to achieve robust positioning for autonomous vehicle navigation. The Otsu method filters scattered noise events from the dynamic event point cloud image, and a dilation operation expands the dynamic area, solving the dispersion problem of the DVS image. The YOLO-small network identifies, clusters, and marks moving objects, and the residual static lidar point cloud is obtained by subtracting them. The static environment point cloud is then evaluated and used for positioning: a three-dimensional distribution error model of the lidar point features is established, its information entropy and error entropy are solved, and the matching weights of the features are optimized, improving positioning accuracy.
The above embodiments are merely examples and do not limit the scope of the present invention. These embodiments may be implemented in other various manners, and various omissions, substitutions, and changes may be made without departing from the technical spirit of the present invention.
Claims (10)
1. A fusion positioning method based on multiple sensors in a dynamic scene is characterized by comprising the following steps:
S1, acquiring a dynamic event point cloud acquired by a dynamic vision sensor camera and environment point cloud data acquired by a laser radar;
S2, processing the dynamic event point cloud and filtering noise events to obtain a dynamic object image;
S3, identifying the dynamic object in the dynamic object image, and selecting a dynamic object area;
S4, mapping the dynamic object area to the environment point cloud data, and removing the dynamic object point cloud to obtain a static environment point cloud;
and S5, registering the static environment point cloud and the map features to acquire positioning information.
2. The method for fusion positioning based on multiple sensors in a dynamic scene according to claim 1, wherein the step S2 specifically comprises:
S21, projecting the dynamic event point clouds at different sampling moments in a sampling time period onto the same image plane to obtain a dynamic event point cloud image;
S22, filtering based on the event point cloud thickness of each pixel point in the dynamic event point cloud image, and filtering out the noise point cloud;
and S23, carrying out binarization processing on the dynamic event point cloud image with the noise point cloud filtered out to obtain a dynamic object image.
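Steps S21–S22 amount to accumulating, per pixel, how many events landed there across the sampling window. A minimal sketch (function name and event format are hypothetical illustrations, not from the patent):

```python
import numpy as np

def event_thickness_image(events, width, height):
    """Project DVS events from all sampling instants within a window onto one
    image plane and count, per pixel, how many events landed there
    (the per-pixel 'point cloud thickness' n_i).
    events: iterable of (x, y) pixel coordinates, timestamps already merged."""
    img = np.zeros((height, width), dtype=np.int32)
    for x, y in events:
        img[y, x] += 1                    # accumulate the event count per pixel
    return img

# Usage: three events, two of which fall on the same pixel
img = event_thickness_image([(1, 2), (1, 2), (0, 0)], width=4, height=4)
```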
3. The multi-sensor-based fusion positioning method in the dynamic scene according to claim 2, wherein the step S22 specifically comprises:
S221, counting the number of event point clouds at each pixel point in the dynamic event point cloud image, and recording it as the point cloud thickness ni of the ith pixel point, i = 0, 1, …, L−1, wherein L is the total number of pixel points;
S222, calculating the probability Pi that the number of event point clouds of the ith pixel point accounts for all the event points, and the average event point cloud thickness μT:
Pi = ni/n, μT = n/L
n = n0 + n1 + … + nL−1
S223, sorting the point cloud thicknesses of the L pixel points from largest to smallest, and finding the point cloud thickness threshold nT that distinguishes the noise point cloud from the dynamic object point cloud, wherein
nT = argmax σB²(nT), σB²(nT) = w0(nT)·(μ0(nT) − μT)² + w1(nT)·(μ1(nT) − μT)²
j represents the jth pixel point ordered from largest to smallest point cloud thickness, w0(nT) represents the probability of the dynamic object region, w1(nT) represents the probability of the noise region, μ0(nT) represents the average point cloud thickness of the dynamic object region, μ1(nT) represents the average point cloud thickness of the noise region, and σB²(nT) is the between-class point cloud thickness variance of the dynamic object region and the noise region;
S224, taking points in the dynamic event point cloud image whose point cloud thickness is smaller than nT as noise points, and filtering out the noise points.
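The thresholding of steps S222–S224 is a variant of Otsu's method applied to per-pixel event counts rather than gray levels. A hedged sketch under that reading (the function name and the example data are hypothetical):

```python
import numpy as np

def otsu_thickness_threshold(thickness):
    """Otsu-style search for a thickness threshold n_T separating noise
    pixels (thin event stacks) from dynamic-object pixels (thick stacks),
    by maximizing the between-class variance of the thickness histogram.
    thickness: 1-D integer array of per-pixel event counts n_i."""
    hist = np.bincount(thickness)                 # pixels per thickness level
    levels = np.arange(len(hist))
    p = hist / hist.sum()                         # probability of each level
    mu_T = (levels * p).sum()                     # global mean thickness
    best_t, best_var = 0, -1.0
    for t in levels[1:]:
        w1 = p[:t].sum()                          # noise-region probability
        w0 = 1.0 - w1                             # dynamic-region probability
        if w0 == 0 or w1 == 0:
            continue
        mu1 = (levels[:t] * p[:t]).sum() / w1     # mean of the thin (noise) side
        mu0 = (levels[t:] * p[t:]).sum() / w0     # mean of the thick side
        var_b = w0 * (mu0 - mu_T) ** 2 + w1 * (mu1 - mu_T) ** 2
        if var_b > best_var:
            best_t, best_var = t, var_b
    return best_t

# Usage: mostly 1-thick noise pixels plus a cluster of 10-thick object pixels;
# pixels whose thickness falls below the returned n_T are discarded as noise
thick = np.array([1] * 90 + [10] * 10)
nT = otsu_thickness_threshold(thick)
```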
4. The method for fusion positioning based on multiple sensors in a dynamic scene according to claim 3, wherein step S23 performs binarization processing according to the following formula to obtain the dynamic object image:
f(x, y) = 0 when n(x, y) ≥ nT, and f(x, y) = 255 otherwise
wherein f(x, y) represents the gray value of the pixel point with coordinates (x, y), and n(x, y) represents the point cloud thickness of the pixel point with coordinates (x, y).
5. The fusion positioning method based on multiple sensors in a dynamic scene according to claim 4, wherein step S23 further performs a dilation convolution process after the dynamic object image is obtained, and during the dilation convolution the gray value of each pixel point is replaced by the minimum gray value within the convolution kernel.
6. The method for fusion positioning based on multiple sensors in a dynamic scene according to claim 1, wherein step S3 employs a YOLO-small network to identify the dynamic object in the dynamic object image.
7. The method for fusion positioning based on multiple sensors in a dynamic scene according to claim 1, wherein the step S5 specifically comprises:
s51, establishing a static environment point cloud error model, and performing error evaluation on the static environment point cloud by using the static environment point cloud error model;
and S52, determining the error weight of each point based on the static environment point cloud error evaluation result, and registering the static environment point cloud and the map features based on the error weight to obtain positioning information.
8. The method as claimed in claim 7, wherein the static environment point cloud error model includes a uniform distribution error model and a Gaussian distribution error model, the uniform distribution error model is used for calculating the uniform distribution errors of each point in the static environment point cloud, and the Gaussian distribution error model is used for calculating the Gaussian distribution errors of each point in the static environment point cloud.
9. The method for fusion positioning based on multiple sensors in a dynamic scene according to claim 8, wherein the error weight of each point in the static environment point cloud in step S52 is:
wherein wi is the error weight of the ith point in the static environment point cloud, wi even is the weight corresponding to the uniform distribution error model of the ith point in the static environment point cloud, wi gaussian is the weight corresponding to the Gaussian distribution error model of the ith point in the static environment point cloud, PPi even is the uniform distribution information entropy of the ith point in the static environment point cloud, Pi even is the relative probability distribution information entropy of the ith point of the current static environment point cloud in the uniform distribution error model, PPi gaussian is the Gaussian distribution information entropy of the ith point in the static environment point cloud, and Pi gaussian is the relative probability distribution information entropy of the ith point of the current static environment point cloud in the Gaussian distribution error model.
10. The multi-sensor-based fusion positioning method in a dynamic scene according to claim 9, wherein Pi even and Pi gaussian are respectively:
Pi even = ln[λ1i·λ2i·λ3i·di³]
λ1i = θdi, λ2i = φdi, λ3i = hdi
wherein θ is the horizontal angular resolution of the laser radar, φ is the vertical angular resolution of the laser radar, h is the air dielectric constant, and di is the scanning distance of the ith point in the static environment point cloud.
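As an illustrative sketch (not part of the claims; the function name and the example resolution values are hypothetical), the uniform-distribution information entropy of claim 10 can be computed directly from the formula above:

```python
import math

def uniform_entropy(d, theta, phi, h):
    """Uniform-distribution information entropy of a laser radar point at
    scan distance d: the three error half-axes grow linearly with distance,
    lambda_1 = theta*d, lambda_2 = phi*d, lambda_3 = h*d, and
    P_even = ln(lambda_1 * lambda_2 * lambda_3 * d**3)."""
    lam1, lam2, lam3 = theta * d, phi * d, h * d
    return math.log(lam1 * lam2 * lam3 * d ** 3)

# Usage: entropy grows with scan distance, so distant points carry more
# uncertainty and would receive a lower matching weight
near = uniform_entropy(5.0, theta=0.0031, phi=0.0070, h=0.0031)
far = uniform_entropy(50.0, theta=0.0031, phi=0.0070, h=0.0031)
```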
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111253666.4A CN114049542A (en) | 2021-10-27 | 2021-10-27 | Fusion positioning method based on multiple sensors in dynamic scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114049542A true CN114049542A (en) | 2022-02-15 |
Family
ID=80205982
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114049542A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024066980A1 (en) * | 2022-09-26 | 2024-04-04 | 华为云计算技术有限公司 | Relocalization method and apparatus |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |