CN113689502A - Multi-information fusion obstacle measuring method - Google Patents

Multi-information fusion obstacle measuring method

Info

Publication number
CN113689502A
Authority
CN
China
Prior art keywords
formula
obstacle
camera
layer
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111020484.2A
Other languages
Chinese (zh)
Other versions
CN113689502B (en)
Inventor
刘卿卿
马信源
邱东
孙锦程
明梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202111020484.2A priority Critical patent/CN113689502B/en
Publication of CN113689502A publication Critical patent/CN113689502A/en
Application granted granted Critical
Publication of CN113689502B publication Critical patent/CN113689502B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Remote Sensing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of measurement and discloses a multi-information fusion obstacle measurement method comprising the following steps: S1, transforming the laser radar coordinates into the camera coordinate system through a coordinate transformation, and then obtaining surrounding obstacle information from the depth camera and the laser radar sensor; S2, training the measured obstacle data with an RBF neural network; and S3, formulating a fusion rule and estimating and updating the motion state of the unmanned vehicle in combination with Bayesian estimation and updating. The invention uses a laser radar and a depth camera sensor, measures the distance to the obstacle through the corresponding coordinate transformation, trains the sensor data with an RBF neural network using a suitable transfer function and learning algorithm, and estimates and updates the motion state of the unmanned vehicle by formulating a corresponding fusion rule combined with Bayesian estimation and updating, thereby realizing accurate obstacle measurement with high accuracy and high speed.

Description

Multi-information fusion obstacle measuring method
Technical Field
The invention relates to the technical field of measurement, in particular to a multi-information fusion obstacle measurement method.
Background
In recent years, intelligent technologies such as unmanned vehicles and robots have been a focus of research. Road obstacle measurement is an important branch of this field, because it determines the decisions and the precision of tasks such as feasible-region extraction, path planning and target identification performed by intelligent agents such as unmanned vehicles and robots. For small and medium-sized unmanned vehicles and robots, what affects them most is not pedestrians, road signs or lane lines, but small obstacles on the traveling road, such as stones and road-surface litter. To ensure that an intelligent agent can perceive its environment quantitatively, measuring parameters such as the maximum width of an obstacle, the distance to the obstacle and its azimuth angle is a basic and important task, and it determines the range and depth of applications of small and medium-sized unmanned vehicles and robots.
The laser radar is a high-precision sensor commonly used in this field, but it is relatively expensive, and its point cloud becomes sparse at long range with low vertical density; laser radars with 16 or fewer lines are particularly insensitive to obstacles, which is even more pronounced when measuring small objects and remains a difficulty in current laser radar applications. Road obstacle measurement based on visual information must account for the influence of external factors such as illumination and shadow on measurement precision; the auxiliary algorithms added to reduce this influence generally increase algorithmic complexity, and the precision of vision-based measurement methods is in addition relatively low.
Disclosure of Invention
In order to overcome the drawbacks mentioned in the background art, the present invention provides a multi-information fusion obstacle measurement method that uses a laser radar and a depth camera sensor, applies the corresponding coordinate transformation, trains the data obtained from the sensors with an RBF neural network, and combines the training result with Bayesian estimation through a fusion rule to obtain the obstacle data.
The purpose of the invention can be realized by the following technical scheme:
a multi-information fusion obstacle measuring method comprises the following steps:
s1, transforming the laser radar coordinate to the coordinate system of the camera through coordinate transformation, and then obtaining surrounding obstacle information through the depth camera and the laser radar sensor;
s2, training the data of the measured obstacle information by using an RBF neural network;
and S3, formulating a fusion rule and estimating and updating the motion state of the unmanned vehicle in combination with Bayesian estimation and updating; the overall flow is sketched below.
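The three steps can be organized, for example, as the following minimal Python skeleton. The function names, argument shapes and the placeholder bodies are illustrative assumptions made for this sketch only; they are not defined by the patent, and the concrete computations follow the formulas detailed below and in the embodiment.

```python
# Minimal sketch of the S1 -> S2 -> S3 flow. All names and shapes here are
# illustrative assumptions; the actual computations follow formulas (1)-(15).

def s1_project_to_pixels(lidar_points_xy, camera_params):
    """S1: map 2-D laser radar points into the camera pixel frame (formulas (1)-(4))."""
    ...  # see the projection sketch in the detailed description

def s2_train_rbf(samples, expected_values):
    """S2: train an RBF network on the measured obstacle data (formulas (5)-(9))."""
    ...  # see the RBF training sketch in the detailed description

def s3_fuse_and_update(prior, lidar_region, camera_region):
    """S3: apply the fusion rule and the Bayesian update (formulas (10)-(15))."""
    ...  # see the Bayesian fusion sketch in the detailed description

def measure_obstacles(lidar_points_xy, depth_frame, camera_params, prior):
    """End-to-end flow of the method: S1 -> S2 -> S3."""
    lidar_pixels = s1_project_to_pixels(lidar_points_xy, camera_params)
    prediction = s2_train_rbf(lidar_pixels, depth_frame)
    return s3_fuse_and_update(prior, prediction, depth_frame)
```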
Further, in step S1, a spatial coordinate system is first established on the robot itself, and the laser radar data are projected into the pixel coordinate system through a series of coordinate transformations.
The coordinates of point P in the camera coordinate system, (X_c, Y_c, Z_c), and in the radar coordinate system, (X_r, Y_r), are related as follows:
[Formulas (1) and (2): transformation between the radar coordinates (X_r, Y_r) and the camera coordinates (X_c, Y_c, Z_c); the formula images are not reproduced here.]
In formulas (1) and (2), H_c is the height of the camera center above the ground, H_r is the height of the laser radar sensor center above the ground, and L is the lateral distance between the camera center and the laser radar sensor center.
The coordinates of point P in the image coordinate system are obtained from formulas (1) and (2):
x = f·X_c / Z_c,  y = f·Y_c / Z_c   (3)
In formula (3), f is the focal length of the camera; the conversion between image and pixel coordinates then gives the position (u', v') of point P in the image:
u' = x / dx + u_0,  v' = y / dy + v_0   (4)
In formula (4), u_0 and v_0 are the coordinates of the origin of the image coordinate system, and dx and dy are the physical sizes of a pixel along the u and v axes.
Further, the RBF neural network in step S2 comprises an input layer, a hidden layer and an output layer; the transformation from the input layer to the hidden layer is non-linear, the hidden layer trains and learns the data through radial basis functions, the transformation from the hidden layer to the output layer is linear, and the output of the network is a linear weighted sum of the hidden-unit outputs.
Further, the input layer receives the data of the laser radar and the depth camera, the radial basis layer contains the centers of the basis functions, the result obtained through the radial basis functions is sent to the linear layer, and the predicted value of the data is obtained through a linear transformation;
the RBF neural network uses the radial basis function method, and the activation function of the radial basis neural network can be expressed as:
R(x_p, c_i) = exp(-‖x_p - c_i‖² / (2σ²))   (5)
In formula (5), x_p is the input vector, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function;
from the structure of the radial basis function neural network, the network output can be derived as:
y_j = Σ_{i=1}^h w_ij · exp(-‖x_p - c_i‖² / (2σ²)),  j = 1, 2, ..., n   (6)
In formula (6), w_ij denotes the connection weight from the hidden layer to the output layer, and j = 1, 2, ..., n;
loss function representation using least squares:
E = (1/2) Σ_{j=1}^m ‖d_j - y_j‖²   (7)
In formula (7), d is the expected value and σ is the variance of the Gaussian function; j = 1, 2, ..., m; m is the number of input vectors.
further, the output layer is completed by two steps by using a learning method of self-organizing selecting centers, the first step is an unsupervised learning process, the variance of the basis function is calculated, the second step is a supervised learning process, the weight from the radial basic layer to the linear layer is calculated, and the specific algorithm is as follows:
the first step, h centers are selected to perform k-means clustering, and for the radial basis of the Gaussian kernel function, the variance is solved by a formula:
Figure BDA0003241751990000041
in the formula (8), cmaxThe maximum distance between the selected central points is taken;
in the second step, the weights from the radial basis layer to the linear layer are calculated; after simplification, the formula is:
w = exp((h / c_max²) · ‖x_p - c_i‖²),  p = 1, 2, ..., P;  i = 1, 2, ..., h   (9)
In formula (9), p = 1, 2, ..., P; i = 1, 2, ..., h; P is the number of input vectors.
further, in step S3, the fusion model of the lidar detection area and the depth camera detection area is:
Figure BDA0003241751990000043
in the formula (10), a ZonelasterZone for lidar detected areascameraFor regions detected by the depth camera, glaster(i, j) and gcamera(i, j) represent the matrices of the two corresponding regions, q (g), respectivelylaster(i,j),gcamera(i, j)) represents a fusion rule;
and (3) updating the Bayesian estimation to obtain the posterior probability density of the system according to the following steps:
First, the prior probability density p(x_{t-1} | z_{1:t-1}) at time t-1 is used to calculate p(x_t | z_{1:t-1}). Assuming that x_t depends only on x_{t-1} and that p(x_{t-1} | z_{1:t-1}) is known, we have:
p(x_t | z_{1:t-1}) = ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1}) dx_{t-1}   (11)
The observation z_t at time t is then used to correct p(x_t | z_{1:t-1}), giving the posterior probability density p(x_t | z_{1:t}):
p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (12)
Applying Bayes' rule to the observation sequence z_{1:t} gives:
p(x_t | z_{1:t}) = p(z_{1:t} | x_t) p(x_t) / p(z_{1:t})   (13)
Transforming the above formula using the joint-distribution probability formula and the conditional probability formula:
p(x_t | z_{1:t}) = p(z_t | z_{1:t-1}, x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (14)
Each observation is assumed to be independent, and the final posterior probability is obtained:
p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (15)
In formulas (11) to (15), p(z_t | x_t) is the likelihood probability of the system observation equation, p(x_t | z_{1:t-1}) is the prior probability, and p(z_t | z_{1:t-1}) is a constant used for the final normalization of the equation;
the Bayesian estimate is fused with the preceding region fusion rule to obtain the fused environmental obstacle data.
The invention has the beneficial effects that:
the invention utilizes a laser radar and a depth camera sensor, measures the distance of the obstacle through corresponding coordinate transformation, trains the data obtained by the sensor by selecting a proper transfer function and a learning algorithm and utilizing a RBF neural network, and has the result of simulation verification that the distance measurement error of the result obtained by the method is lower than 0.1 percent and the speed is improved by 24 percent compared with the traditional bp algorithm. The motion state of the unmanned vehicle is estimated and updated by formulating a corresponding fusion rule and combining Bayesian estimation and updating, so that the obstacle can be accurately measured, the method has the characteristics of high accuracy and high speed, and the unmanned vehicle has great advantages in the aspect of detecting small obstacles.
Drawings
The invention will be further described with reference to the accompanying drawings.
FIG. 1 is a flow chart of obstacle information detection according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a coordinate system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an RBF neural network according to an embodiment of the present invention;
FIG. 4 is a diagram of data fusion rules according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The flow of the measurement method is shown in FIG. 1. First, the laser radar coordinates are transformed into the camera coordinate system through a coordinate-system transformation; then the measured data are trained with an RBF neural network combined with a suitable learning algorithm to obtain the trained predicted values; finally, the predicted values are put into the fusion rule and combined with Bayesian estimation, from which the surrounding obstacle information is obtained.
1. The laser radar coordinates are transformed into the camera coordinate system through a coordinate transformation, and the surrounding obstacle information is obtained from the depth camera and the laser radar sensor.
In FIG. 2, O_r denotes the center of the laser radar sensor and O_c the center of the camera. The two sensors are installed at fixed positions at certain heights above the ground, with a height difference of H_c - H_r and a lateral distance of L. The laser radar scanning plane is parallel to the corresponding camera plane, i.e. X_r O_r Y_r is parallel to X_c O_c Y_c. The coordinates of point P in the camera coordinate system, (X_c, Y_c, Z_c), and in the radar coordinate system, (X_r, Y_r), are then related as follows:
[Formulas (1) and (2): transformation between the radar coordinates (X_r, Y_r) and the camera coordinates (X_c, Y_c, Z_c); the formula images are not reproduced here.]
From the above formulas, the coordinates of point P in the image coordinate system are obtained:
x = f·X_c / Z_c,  y = f·Y_c / Z_c   (3)
The image coordinates are then converted into pixel coordinates by the following formula, giving the position (u', v') of point P in the image:
u' = x / dx + u_0,  v' = y / dy + v_0   (4)
Following the above steps, the transformation of a laser radar coordinate system point P(X_r, Y_r) into the pixel coordinate system point P(u', v') involves only some internal parameters of the camera, which can be determined by Zhang Zhengyou's calibration method, thereby completing the coordinate transformation.
2. The measured data are trained using the RBF neural network.
In FIG. 3, the input layer receives the data of the laser radar and the depth camera, the radial basis layer contains the centers of the basis functions, the result obtained through the radial basis functions is sent to the linear layer, and the predicted value of the data is obtained through a linear transformation.
The RBF neural network uses the radial basis function (RBF) method, and its activation function can be expressed as:
R(x_p, c_i) = exp(-‖x_p - c_i‖² / (2σ²))   (5)
In the above formula, x_p is the input vector, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function. From the structure of the radial basis function neural network, the network output can be derived as:
y_j = Σ_{i=1}^h w_ij · exp(-‖x_p - c_i‖² / (2σ²)),  j = 1, 2, ..., n   (6)
In the above formula, w_ij denotes the connection weight from the hidden layer to the output layer, and j = 1, 2, ..., n. Finally, the loss function is expressed using least squares:
E = (1/2) Σ_{j=1}^m ‖d_j - y_j‖²   (7)
In the above formula, d is the expected value and σ is the variance of the Gaussian function; σ controls the radial range of action of the function and adjusts the sensitivity of the neurons, which greatly improves the computing capability of the RBF neural network.
The selection of the radial basis centers therefore strongly influences the final predicted value. The centers are taken from the data themselves: for example, with 40 groups of data, the first radial basis center is taken as the first data group, and so on. Using the activation function of the radial basis neural network, the output of the radial basis network can be computed, after which the variance and the weights from the radial basis layer to the linear layer are obtained.
The method is completed in two main steps. The first step is an unsupervised learning process in which the variance of the basis functions is calculated; the second step is a supervised learning process in which the weights from the radial basis layer to the linear layer are calculated.
The specific algorithm is as follows. First, h centers are selected by k-means clustering; for radial bases with a Gaussian kernel function, the variance is solved by the formula:
σ = c_max / √(2h)   (8)
In the above formula, c_max is the maximum distance between the selected centers.
Second, the weights from the radial basis layer to the linear layer are calculated; after simplification, the formula is:
w = exp((h / c_max²) · ‖x_p - c_i‖²),  p = 1, 2, ..., P;  i = 1, 2, ..., h   (9)
In the above formula, p = 1, 2, ..., P; i = 1, 2, ..., h; and P is the number of input vectors.
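The two-step learning procedure described above can be sketched in Python as follows. The k-means loop, the use of formula (8) for the variance, and a plain least-squares solve for the linear-layer weights are assumptions of this sketch (the patent only states that the weights are calculated in the supervised step); the synthetic data at the end are illustrative only.

```python
import numpy as np

def train_rbf(X, d, h=8, iters=50, seed=0):
    """Two-step RBF learning: unsupervised center selection, then supervised weights.
    X: (P, n_features) sensor samples, d: (P,) expected values."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    # Step 1 (unsupervised): k-means clustering to select h centers.
    centers = X[rng.choice(len(X), size=h, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for i in range(h):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(axis=0)
    # Formula (8): sigma = c_max / sqrt(2h), c_max = max distance between centers.
    c_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = c_max / np.sqrt(2 * h)
    # Formula (5): Gaussian activations of the radial basis layer.
    G = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2 / (2 * sigma ** 2))
    # Step 2 (supervised): linear-layer weights via a least-squares fit (assumption).
    w, *_ = np.linalg.lstsq(G, d, rcond=None)
    return centers, sigma, w

def rbf_predict(X, centers, sigma, w):
    """Formula (6): network output as a weighted sum of the hidden-unit outputs."""
    X = np.asarray(X, dtype=float)
    G = np.exp(-np.linalg.norm(X[:, None] - centers[None], axis=2) ** 2 / (2 * sigma ** 2))
    return G @ w

# Toy usage with 40 synthetic data groups (illustrative only):
X = np.random.default_rng(1).uniform(0.0, 5.0, size=(40, 2))
d = np.linalg.norm(X, axis=1)                 # stand-in target: obstacle distance
centers, sigma, w = train_rbf(X, d)
print(rbf_predict(X[:3], centers, sigma, w), d[:3])
```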
3. The corresponding fusion rule is formulated, and the motion state of the unmanned vehicle is estimated and updated in combination with Bayesian estimation and updating.
Assume that the area detected by the laser radar is Zone_laser, the area detected by the depth camera is Zone_camera, and g_laser(i, j) and g_camera(i, j) are the matrices of these two corresponding areas, storing the obstacle information of the two areas. The fusion model between them is:
[Formula (10): fusion model combining g_laser(i, j) and g_camera(i, j) over Zone_laser and Zone_camera through the fusion rule q; the formula image is not reproduced here.]
Here q(g_laser(i, j), g_camera(i, j)) denotes the fusion rule; the specific rule is shown in FIG. 4, in which d_l is the distance measured by the laser radar and d_c is the distance measured by the depth camera. The probability density of the initial state of the robot is assumed to be p(x_0 | z_0) = p(x_0), where x_0 is the state of the system at the initial time and z_0 is the observation at the initial time. The posterior probability density of the system is obtained as follows. First, the prior probability density p(x_{t-1} | z_{1:t-1}) at time t-1 is used to calculate p(x_t | z_{1:t-1}). Assuming that x_t depends only on x_{t-1} and that p(x_{t-1} | z_{1:t-1}) is known, we have:
p(x_t | z_{1:t-1}) = ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1}) dx_{t-1}   (11)
The observation z_t at time t is then used to correct p(x_t | z_{1:t-1}), giving the posterior probability density p(x_t | z_{1:t}):
p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (12)
Applying Bayes' rule to the observation sequence z_{1:t} gives:
p(x_t | z_{1:t}) = p(z_{1:t} | x_t) p(x_t) / p(z_{1:t})   (13)
Transforming the above formula using the joint-distribution probability formula and the conditional probability formula:
p(x_t | z_{1:t}) = p(z_t | z_{1:t-1}, x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (14)
Assuming that each observation is independent, the final posterior probability is obtained:
p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (15)
In the formulas, p(z_t | x_t) is the likelihood probability of the system observation equation, p(x_t | z_{1:t-1}) is the prior probability, and p(z_t | z_{1:t-1}) is a constant used for the final normalization of the equation. The Bayesian estimate is then fused with the preceding region fusion rule to obtain the fused environmental obstacle data.
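A minimal Python sketch of this step is given below: the fusion rule q is replaced by a simple stand-in (averaging cells seen by both sensors, otherwise keeping whichever sensor reports), and the Bayesian prediction and update of formulas (11) and (15) are carried out on a discretized state. The stand-in rule, the distance grid, the static transition model and the Gaussian likelihood are all assumptions of this sketch, not the rule of FIG. 4.

```python
import numpy as np

def fuse_regions(g_laser, g_camera):
    """Stand-in for q(g_laser(i, j), g_camera(i, j)): combine the two region matrices."""
    both = (g_laser > 0) & (g_camera > 0)
    # Average where both sensors detect the cell, otherwise keep the available value.
    return np.where(both, 0.5 * (g_laser + g_camera), np.maximum(g_laser, g_camera))

def bayes_step(prior, transition, likelihood):
    """One Bayesian prediction + update over a discretized state x.
    prior: p(x_{t-1} | z_{1:t-1}), transition: matrix of p(x_t | x_{t-1}),
    likelihood: p(z_t | x_t) evaluated on the grid."""
    predicted = transition @ prior            # formula (11), discretized integral
    posterior = likelihood * predicted        # numerator of formula (15)
    return posterior / posterior.sum()        # division by p(z_t | z_{1:t-1})

# Toy usage: a 50-cell grid over the obstacle distance (illustrative only).
grid = np.linspace(0.0, 5.0, 50)
prior = np.full(50, 1.0 / 50)                 # uniform p(x_0)
transition = np.eye(50)                       # assume a (nearly) static obstacle
d_l, d_c = 2.1, 1.9                           # lidar / depth-camera distances
z_fused = fuse_regions(np.array([[d_l]]), np.array([[d_c]]))[0, 0]
likelihood = np.exp(-0.5 * ((grid - z_fused) / 0.2) ** 2)
posterior = bayes_step(prior, transition, likelihood)
print("fused distance:", z_fused, "estimated distance:", grid[np.argmax(posterior)])
```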
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the specification and illustrated only to illustrate the principle of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the present invention, which fall within the scope of the invention as claimed.

Claims (6)

1. A multi-information fusion obstacle measuring method is characterized by comprising the following steps:
s1, transforming the laser radar coordinate to the coordinate system of the camera through coordinate transformation, and then obtaining surrounding obstacle information through the depth camera and the laser radar sensor;
s2, training the data of the measured obstacle information by using an RBF neural network;
and S3, formulating a fusion rule and estimating and updating the motion state of the unmanned vehicle in combination with Bayesian estimation and updating.
2. The multi-information fusion obstacle measuring method according to claim 1, wherein in step S1, a spatial coordinate system is first established on the robot itself, and the laser radar data are projected into the pixel coordinate system through a series of coordinate transformations;
the coordinates of point P in the camera coordinate system, (X_c, Y_c, Z_c), and in the radar coordinate system, (X_r, Y_r), are related as follows:
[Formulas (1) and (2): transformation between the radar coordinates (X_r, Y_r) and the camera coordinates (X_c, Y_c, Z_c); the formula images are not reproduced here.]
in formulas (1) and (2), H_c is the height of the camera center above the ground, H_r is the height of the laser radar sensor center above the ground, and L is the lateral distance between the camera center and the laser radar sensor center;
the coordinates of point P in the image coordinate system are obtained from formulas (1) and (2):
x = f·X_c / Z_c,  y = f·Y_c / Z_c   (3)
in formula (3), f is the focal length of the camera; the conversion between image and pixel coordinates then gives the position (u', v') of point P in the image:
u' = x / dx + u_0,  v' = y / dy + v_0   (4)
in formula (4), u_0 and v_0 are the coordinates of the origin of the image coordinate system, and dx and dy are the physical sizes of a pixel along the u and v axes.
3. The multi-information fusion obstacle measuring method according to claim 1, wherein the RBF neural network in step S2 comprises an input layer, a hidden layer and an output layer; the transformation from the input layer to the hidden layer is non-linear, the hidden layer trains and learns the data through radial basis functions, the transformation from the hidden layer to the output layer is linear, and the output of the network is a linear weighted sum of the hidden-unit outputs.
4. The multi-information fusion obstacle measuring method according to claim 3, wherein the input layer receives the data of the laser radar and the depth camera, the radial basis layer contains the centers of the basis functions, the result obtained through the radial basis functions is sent to the linear layer, and the predicted value of the data is obtained through a linear transformation;
the RBF neural network uses the radial basis function method, and the activation function of the radial basis neural network can be expressed as:
R(x_p, c_i) = exp(-‖x_p - c_i‖² / (2σ²))   (5)
in formula (5), x_p is the input vector, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function;
from the structure of the radial basis function neural network, the network output can be derived as:
y_j = Σ_{i=1}^h w_ij · exp(-‖x_p - c_i‖² / (2σ²)),  j = 1, 2, ..., n   (6)
in formula (6), w_ij denotes the connection weight from the hidden layer to the output layer, and j = 1, 2, ..., n;
Loss function representation using least squares:
E = (1/2) Σ_{j=1}^m ‖d_j - y_j‖²   (7)
in formula (7), d is the expected value, σ is the variance of the Gaussian function, j = 1, 2, ..., m, and m is the number of input vectors.
5. The multi-information fusion obstacle measuring method according to claim 3, wherein the learning is completed in two steps using the self-organizing center-selection method: the first step is an unsupervised learning process in which the variance of the basis functions is calculated, and the second step is a supervised learning process in which the weights from the radial basis layer to the linear layer are calculated; the specific algorithm is as follows:
in the first step, h centers are selected for k-means clustering; for radial bases with a Gaussian kernel function, the variance is solved by the formula:
σ = c_max / √(2h)   (8)
in formula (8), c_max is the maximum distance between the selected centers;
in the second step, the weights from the radial basis layer to the linear layer are calculated; after simplification, the formula is:
w = exp((h / c_max²) · ‖x_p - c_i‖²),  p = 1, 2, ..., P;  i = 1, 2, ..., h   (9)
in formula (9), p = 1, 2, ..., P; i = 1, 2, ..., h; P is the number of input vectors.
6. The multi-information fusion obstacle measuring method according to claim 1, wherein the fusion model of the laser radar detection area and the depth camera detection area in step S3 is:
[Formula (10): fusion model combining g_laser(i, j) and g_camera(i, j) over Zone_laser and Zone_camera through the fusion rule q; the formula image is not reproduced here.]
in formula (10), Zone_laser is the area detected by the laser radar, Zone_camera is the area detected by the depth camera, g_laser(i, j) and g_camera(i, j) are the matrices of the two corresponding areas, and
q(g_laser(i, j), g_camera(i, j)) denotes the fusion rule;
the Bayesian estimation update obtains the posterior probability density of the system according to the following steps:
first, the prior probability density p(x_{t-1} | z_{1:t-1}) at time t-1 is used to calculate p(x_t | z_{1:t-1}); x_t depends only on x_{t-1} and p(x_{t-1} | z_{1:t-1}) is known, so we have:
p(x_t | z_{1:t-1}) = ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1}) dx_{t-1}   (11)
the observation z_t at time t is then used to correct p(x_t | z_{1:t-1}), giving the posterior probability density p(x_t | z_{1:t}):
p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (12)
applying Bayes' rule to the observation sequence z_{1:t} gives:
p(x_t | z_{1:t}) = p(z_{1:t} | x_t) p(x_t) / p(z_{1:t})   (13)
transforming the above formula using the joint-distribution probability formula and the conditional probability formula:
p(x_t | z_{1:t}) = p(z_t | z_{1:t-1}, x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (14)
each observation is independent, and the final posterior probability is obtained:
p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})   (15)
in formulas (11) to (15), p(z_t | x_t) is the likelihood probability of the system observation equation, p(x_t | z_{1:t-1}) is the prior probability, and p(z_t | z_{1:t-1}) is a constant used for the final normalization of the equation;
and the Bayesian estimate is fused with the preceding region fusion rule to obtain the fused environmental obstacle data.
CN202111020484.2A 2021-09-01 2021-09-01 Multi-information fusion obstacle measurement method Active CN113689502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111020484.2A CN113689502B (en) 2021-09-01 2021-09-01 Multi-information fusion obstacle measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111020484.2A CN113689502B (en) 2021-09-01 2021-09-01 Multi-information fusion obstacle measurement method

Publications (2)

Publication Number Publication Date
CN113689502A true CN113689502A (en) 2021-11-23
CN113689502B CN113689502B (en) 2023-06-30

Family

ID=78584688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111020484.2A Active CN113689502B (en) 2021-09-01 2021-09-01 Multi-information fusion obstacle measurement method

Country Status (1)

Country Link
CN (1) CN113689502B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115600158A (en) * 2022-12-08 2023-01-13 奥特贝睿(天津)科技有限公司(Cn) Unmanned vehicle multi-sensor fusion method
CN116358561A (en) * 2023-05-31 2023-06-30 自然资源部第一海洋研究所 Unmanned ship obstacle scene reconstruction method based on Bayesian multi-source data fusion

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102141776A (en) * 2011-04-26 2011-08-03 江苏科技大学 Particle filter and RBF identification-based neural network PID control parameter self-setting method
CN108710376A (en) * 2018-06-15 2018-10-26 哈尔滨工业大学 The mobile chassis of SLAM and avoidance based on Multi-sensor Fusion
US20190324438A1 (en) * 2017-08-02 2019-10-24 Strong Force Iot Portfolio 2016, Llc Data collection systems having a self-sufficient data acquisition box
CN110910498A (en) * 2019-11-21 2020-03-24 大连理工大学 Method for constructing grid map by using laser radar and binocular camera
CN111445170A (en) * 2020-04-30 2020-07-24 天津大学 Autonomous operation system and method for unmanned rolling machine group
CN111680726A (en) * 2020-05-28 2020-09-18 国网上海市电力公司 Transformer fault diagnosis method and system based on neighbor component analysis and k neighbor learning fusion
CN111708368A (en) * 2020-07-07 2020-09-25 上海工程技术大学 Intelligent wheelchair based on fusion of laser and visual SLAM
CN112025729A (en) * 2020-08-31 2020-12-04 杭州电子科技大学 Multifunctional intelligent medical service robot system based on ROS
US20200402237A1 (en) * 2017-10-13 2020-12-24 Beijing Keya Medical Technology Co., Ld. Interactive clinical diagnosis report system
US10928830B1 (en) * 2019-11-23 2021-02-23 Ha Q Tran Smart vehicle
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion
CN112868225A (en) * 2017-07-27 2021-05-28 阿里·埃布拉希米·阿夫鲁兹 Method and apparatus for combining data to construct a floor plan
CN113011351A (en) * 2021-03-24 2021-06-22 华南理工大学 Working method of intelligent shopping cart and intelligent shopping cart
CN113096183A (en) * 2021-03-18 2021-07-09 武汉科技大学 Obstacle detection and measurement method based on laser radar and monocular camera
CN113110451A (en) * 2021-04-14 2021-07-13 浙江工业大学 Mobile robot obstacle avoidance method with depth camera and single line laser radar fused

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102141776A (en) * 2011-04-26 2011-08-03 江苏科技大学 Particle filter and RBF identification-based neural network PID control parameter self-setting method
CN112868225A (en) * 2017-07-27 2021-05-28 阿里·埃布拉希米·阿夫鲁兹 Method and apparatus for combining data to construct a floor plan
US20190324438A1 (en) * 2017-08-02 2019-10-24 Strong Force Iot Portfolio 2016, Llc Data collection systems having a self-sufficient data acquisition box
US20200402237A1 (en) * 2017-10-13 2020-12-24 Beijing Keya Medical Technology Co., Ld. Interactive clinical diagnosis report system
CN108710376A (en) * 2018-06-15 2018-10-26 哈尔滨工业大学 The mobile chassis of SLAM and avoidance based on Multi-sensor Fusion
CN110910498A (en) * 2019-11-21 2020-03-24 大连理工大学 Method for constructing grid map by using laser radar and binocular camera
US10928830B1 (en) * 2019-11-23 2021-02-23 Ha Q Tran Smart vehicle
CN111445170A (en) * 2020-04-30 2020-07-24 天津大学 Autonomous operation system and method for unmanned rolling machine group
CN111680726A (en) * 2020-05-28 2020-09-18 国网上海市电力公司 Transformer fault diagnosis method and system based on neighbor component analysis and k neighbor learning fusion
CN111708368A (en) * 2020-07-07 2020-09-25 上海工程技术大学 Intelligent wheelchair based on fusion of laser and visual SLAM
CN112025729A (en) * 2020-08-31 2020-12-04 杭州电子科技大学 Multifunctional intelligent medical service robot system based on ROS
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion
CN113096183A (en) * 2021-03-18 2021-07-09 武汉科技大学 Obstacle detection and measurement method based on laser radar and monocular camera
CN113011351A (en) * 2021-03-24 2021-06-22 华南理工大学 Working method of intelligent shopping cart and intelligent shopping cart
CN113110451A (en) * 2021-04-14 2021-07-13 浙江工业大学 Mobile robot obstacle avoidance method with depth camera and single line laser radar fused

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
RENQIANG WANG et al.: "Optimized Radial Basis Function Neural Network Based Intelligent Control Algorithm of Unmanned Surface Vehicles", Journal of Marine Science and Engineering, pages 1-13 *
WEI HUANG et al.: "Information Fusion of Ultrasonic Sensor Based on RBF Network in Obstacle-Avoidance System of Mobile Robot", Applied Mechanics and Materials, pages 791-795 *
刘卿卿 et al.: "Obstacle measurement method based on multi-sensor data fusion" (多传感器数据融合的障碍物测量方法), Microcontrollers & Embedded Systems (单片机与嵌入式系统应用), pages 55-59 *
刘康: "Vision-based robot pose measurement" (基于视觉的机器人姿态测量), China Master's Theses Full-text Database, Information Science and Technology, pages 140-198 *
张国强: "Research on robust lidar-based map construction and small-obstacle measurement algorithms" (基于激光雷达的鲁棒性地图构建与小型障碍物测量算法研究), China Master's Theses Full-text Database, Information Science and Technology, pages 136-457 *
程熙: "Pedestrian following for indoor mobile robots based on depth images" (基于深度图像的室内移动机器人行人跟随), China Master's Theses Full-text Database, Information Science and Technology, pages 138-2857 *
马信源: "Research on indoor obstacle avoidance of wheeled robots based on multi-sensor information fusion" (基于多传感器信息融合的轮式机器人室内避障的研究), China Master's Theses Full-text Database, Information Science and Technology, pages 140-872 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115600158A (en) * 2022-12-08 2023-01-13 奥特贝睿(天津)科技有限公司(Cn) Unmanned vehicle multi-sensor fusion method
CN115600158B (en) * 2022-12-08 2023-04-18 奥特贝睿(天津)科技有限公司 Unmanned vehicle multi-sensor fusion method
CN116358561A (en) * 2023-05-31 2023-06-30 自然资源部第一海洋研究所 Unmanned ship obstacle scene reconstruction method based on Bayesian multi-source data fusion
CN116358561B (en) * 2023-05-31 2023-08-15 自然资源部第一海洋研究所 Unmanned ship obstacle scene reconstruction method based on Bayesian multi-source data fusion

Also Published As

Publication number Publication date
CN113689502B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
CN110645974B (en) Mobile robot indoor map construction method fusing multiple sensors
CN111798475B (en) Indoor environment 3D semantic map construction method based on point cloud deep learning
CN107065890B (en) Intelligent obstacle avoidance method and system for unmanned vehicle
CN107741745B (en) A method of realizing mobile robot autonomous positioning and map structuring
CN109597864B (en) Method and system for real-time positioning and map construction of ellipsoid boundary Kalman filtering
CN113269098A (en) Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
Engel et al. Deeplocalization: Landmark-based self-localization with deep neural networks
JP2019527832A (en) System and method for accurate localization and mapping
CN113689502B (en) Multi-information fusion obstacle measurement method
Thormann et al. Extended target tracking using Gaussian processes with high-resolution automotive radar
CN110263607B (en) Road-level global environment map generation method for unmanned driving
CN114998276B (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
Chen et al. Robot navigation with map-based deep reinforcement learning
CN112731371B (en) Laser radar and vision fusion integrated target tracking system and method
Hata et al. Monte Carlo localization on Gaussian process occupancy maps for urban environments
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN112629520A (en) Robot navigation and positioning method, system, equipment and storage medium
Gwak et al. A review of intelligent self-driving vehicle software research
CN111880191A (en) Map generation method based on multi-agent laser radar and visual information fusion
CN116758153A (en) Multi-factor graph-based back-end optimization method for accurate pose acquisition of robot
CN115950414A (en) Adaptive multi-fusion SLAM method for different sensor data
CN115471526A (en) Automatic driving target detection and tracking method based on multi-source heterogeneous information fusion
Laible et al. Building local terrain maps using spatio-temporal classification for semantic robot localization
CN103345762A (en) Bayes visual tracking method based on manifold learning
JP2021026683A (en) Distance estimation apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant