CN113689502B - Multi-information fusion obstacle measurement method - Google Patents

Multi-information fusion obstacle measurement method

Info

Publication number
CN113689502B
CN113689502B (application CN202111020484.2A)
Authority
CN
China
Prior art keywords
formula
camera
laser radar
obstacle
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111020484.2A
Other languages
Chinese (zh)
Other versions
CN113689502A (en)
Inventor
刘卿卿
马信源
邱东
孙锦程
明梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202111020484.2A
Publication of CN113689502A
Application granted
Publication of CN113689502B
Legal status: Active (current)
Anticipated expiration: not listed

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the technical field of measurement and discloses a multi-information fusion obstacle measurement method comprising the following steps: S1, transforming the laser radar coordinates into the camera coordinate system through a coordinate transformation, and acquiring surrounding obstacle information with a depth camera and a laser radar sensor; S2, training the measured obstacle information with an RBF neural network; and S3, formulating a fusion rule and estimating and updating the motion state of the unmanned vehicle in combination with a Bayesian estimation update. The invention uses the laser radar and the depth camera sensor to measure the distance of obstacles through the corresponding coordinate transformations, trains the sensor data with an RBF neural network using a suitable transfer function and learning algorithm, and estimates and updates the motion state of the unmanned vehicle by formulating corresponding fusion rules combined with the Bayesian estimation update, thereby achieving accurate obstacle measurement with high accuracy and high speed.

Description

Multi-information fusion obstacle measurement method
Technical Field
The invention relates to the technical field of measurement, in particular to a multi-information fusion obstacle measurement method.
Background
In recent years, intelligent technologies such as unmanned vehicles and robots have become a research hotspot. Road obstacle measurement is an important branch of this field, because it determines the decisions and the accuracy with which unmanned vehicles, robots and other agents complete tasks such as feasible-region extraction, path planning and target recognition. For small and medium-sized unmanned vehicles and robots, it is not pedestrians, road signs or lane lines that most affect the execution of these tasks, but small obstacles on the travelling path, such as stones and road-surface litter. To ensure that an intelligent agent can quantitatively sense its environment, measuring parameters such as the maximum width of an obstacle, its distance and its azimuth angle is a fundamental and important task, and it determines the breadth and depth of application of small and medium-sized unmanned vehicles and robots.
The laser radar is a commonly used high-precision sensor in this field, but it is relatively expensive, its point cloud is sparse at long measuring distances and its vertical density is low; with 16 beams in particular, its sensitivity to obstacles is poor, and the problem is even more pronounced when measuring small objects, which remains a difficulty in current laser radar applications. Road obstacle measurement based on visual information must consider the influence of external factors such as illumination and shadow on measurement accuracy; the auxiliary algorithms added to reduce this influence generally increase the complexity of the algorithm, and the accuracy of vision-based measurement methods is relatively low.
Disclosure of Invention
In order to overcome the defects in the background art, the invention aims to provide a multi-information fusion obstacle measurement method that uses a laser radar and a depth camera sensor to obtain obstacle data through corresponding coordinate transformations, trains the sensor data with an RBF neural network, and combines the training result with Bayesian estimation through a fusion rule.
The aim of the invention can be achieved by the following technical scheme:
a multi-information fusion obstacle measurement method comprises the following steps:
s1, transforming laser radar coordinates into a coordinate system of a camera through coordinate transformation, and acquiring surrounding obstacle information through a depth camera and a laser radar sensor;
s2, training the data of the measured obstacle information by utilizing an RBF neural network;
and S3, formulating a fusion rule, and estimating and updating the motion state of the unmanned vehicle by combining Bayesian estimation updating.
In step S1, a spatial coordinate system is first established on the robot itself, and the data of the laser radar is projected into the pixel coordinate system through coordinate-system conversion;
the relationship between the P-point coordinates (X_c, Y_c, Z_c) in the camera coordinate system and the P-point coordinates (X_r, Y_r) in the radar coordinate system is as follows:
[Formulas (1) and (2): rendered as images in the original publication]
In formulas (1) and (2), H_c is the height of the camera center above the ground, H_r is the height of the laser radar sensor center above the ground, and L is the lateral distance between the camera center and the laser radar sensor center;
the coordinates of the P point in the image coordinate system are obtained by the formulas (1) and (2):
[Formula (3): rendered as an image in the original publication]
In formula (3), f is the focal length of the camera. The conversion between image and pixel coordinates then yields the position (u', v') of the point P in the image:
[Formula (4): rendered as an image in the original publication]
in the formula (4), u 0 ,v 0 Representing the coordinates of the origin of the coordinate system of the image, respectively. dx and dy represent the displacement of the pixel point in both the u and v axes.
Further, in step S2, the RBF neural network includes an input layer, a hidden layer and an output layer; the transformation from the input layer to the hidden layer is nonlinear, the hidden layer trains and learns the data through radial basis functions, the transformation from the hidden layer to the output layer is linear, and the output of the network is a linear weighted sum of the hidden-unit outputs.
Further, the input layer contains data of the laser radar and the depth camera, the radial base layer contains the center of a basis function, the obtained result is sent to the linear layer through the radial basis function, and a predicted value of the data is obtained through linear transformation;
the RBF neural network uses a radial basis function method, and the activation function of the radial basis neural network can be expressed as:
R(x_p, c_i) = exp(−‖x_p − c_i‖² / (2σ²))    (5)
In formula (5), x_p represents the input vector, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function;
with x_p as input, the structure of the radial basis function neural network gives the network output as:
y_j = Σ_{i=1}^{h} w_{ij} · exp(−‖x_p − c_i‖² / (2σ²)),  j = 1, 2, ..., n    (6)
In formula (6), w_{ij} represents the connection weight from the hidden layer to the output layer, and j = 1, 2, ..., n;
the loss function is expressed using least squares:
σ = (1/P) · Σ_{j=1}^{m} ‖d_j − y_j c_i‖²    (7)
In formula (7), d is the desired value and σ is the variance of the Gaussian function; j = 1, 2, ..., m, where m is the number of input vectors;
furthermore, the output layer uses the learning method of self-organizing center selection, completed in two steps: the first step is an unsupervised learning process that calculates the variance of the basis functions, and the second step is a supervised learning process that calculates the weights from the radial basis layer to the linear layer. The specific algorithm is as follows:
firstly, selecting h centers to perform k-means clustering, and solving the variance of radial basis of a Gaussian kernel function by a formula:
σ = c_max / √(2h)    (8)
In formula (8), c_max is the maximum distance between the selected center points;
secondly, calculating the weight from the radial base layer to the linear layer, and simplifying the obtained formula as follows:
w = exp((h / c_max²) · ‖x_p − c_i‖²),  p = 1, 2, ..., P;  i = 1, 2, ..., h    (9)
In formula (9), p = 1, 2, ..., P; i = 1, 2, ..., h; P is the number of input vectors;
further, in step S3, the fusion model of the laser radar detection area and the depth camera detection area is:
[Formula (10): rendered as an image in the original publication]
In formula (10), Zone_laster is the area detected by the laser radar, Zone_camera is the area detected by the depth camera, g_laster(i, j) and g_camera(i, j) represent the matrices of the two corresponding regions, and q(g_laster(i, j), g_camera(i, j)) represents the fusion rule;
the Bayesian estimation update finds the system posterior probability density according to the following steps:
first, the prior probability density p(x_{t-1}|z_{1:t-1}) at time t−1 is used to calculate p(x_t|z_{1:t-1}); here it is assumed that x_t depends only on x_{t-1} and that p(x_{t-1}|z_{1:t-1}) is known:
p(x_t|z_{1:t-1}) = ∫ p(x_t|x_{t-1}) p(x_{t-1}|z_{1:t-1}) dx_{t-1}    (11),
then the observed value z_t at time t is used to correct p(x_t|z_{1:t-1}), giving the system posterior probability density p(x_t|z_{1:t}):
p(x_t|z_{1:t}) = p(z_t|x_t) p(x_t|z_{1:t-1}) / p(z_t|z_{1:t-1})    (12)
with z_t taken as an independent value, one obtains:
p(z_t|z_{1:t-1}) = ∫ p(z_t|x_t) p(x_t|z_{1:t-1}) dx_t    (13)
transforming the above formula by combining the distribution probability formula and the conditional probability formula:
p(x_t|z_{1:t}) = p(z_t|x_t, z_{1:t-1}) p(x_t|z_{1:t-1}) / p(z_t|z_{1:t-1})    (14)
let each observation be independent, obtain the final posterior probability:
p(x_t|z_{1:t}) = p(z_t|x_t) p(x_t|z_{1:t-1}) / ∫ p(z_t|x_t) p(x_t|z_{1:t-1}) dx_t    (15)
In formulas (11)-(15), p(z_t|x_t) is the likelihood probability of the system observation equation, p(x_t|z_{1:t-1}) is the prior probability, and p(z_t|z_{1:t-1}) is the normalizing constant of the equation;
and fusing the Bayesian estimation with the previous region fusion rule to obtain fused environmental obstacle data.
The invention has the beneficial effects that:
the invention utilizes the laser radar and the depth camera sensor, measures the distance of the obstacle through corresponding coordinate transformation, trains the data obtained by the sensor by utilizing the RBF neural network through selecting proper transfer function and learning algorithm, and shows that the distance measurement error of the result obtained by the method is lower than 0.1 percent and the speed is improved by 24 percent compared with the traditional bp algorithm through simulation verification. And the motion state of the unmanned vehicle is estimated and updated by formulating corresponding fusion rules and combining Bayesian estimation update, so that the accurate measurement of the obstacle is realized, the method has the characteristics of high accuracy and high speed, and the unmanned vehicle has greater advantages in the aspect of small obstacle detection.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of obstacle information detection in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of a coordinate system position relationship according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an RBF neural network according to an embodiment of the present invention;
FIG. 4 is a diagram of data fusion rules according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The flow chart of the measuring method is shown in FIG. 1. First, the laser radar coordinates are transformed into the camera coordinate system through a coordinate-system transformation. The measured data are then trained with an RBF neural network together with a suitable learning algorithm to obtain a trained predicted value. Finally, the predicted value is put into the fusion rule and, combined with Bayesian estimation, the surrounding obstacle information is obtained.
1. The laser radar coordinates are transformed into the camera coordinate system through a coordinate transformation, and the surrounding obstacle information is acquired through the depth camera and the laser radar sensor.
In FIG. 2, O_r denotes the center of the laser radar sensor and O_C the center of the camera. The installation positions of the two sensors are fixed at certain heights above the ground; the height difference is H_C − H_r and the lateral distance difference is L. Since the lidar scanning plane is parallel to the cross-section of the camera, X_rO_rY_r and X_cO_cY_C are parallel. We can therefore obtain the relationship between the P-point coordinates (X_C, Y_C, Z_C) in the camera coordinate system and the P-point coordinates (X_r, Y_r) in the radar coordinate system:
[Formulas (1) and (2): rendered as images in the original publication]
by the above formula, we can obtain the coordinates of the P point in the image coordinate system:
[Formula (3): rendered as an image in the original publication]
Then the conversion between the image and the pixel coordinates is carried out, and the position (u', v') of the point P in the image is obtained:
[Formula (4): rendered as an image in the original publication]
According to these steps, converting the point P(X_r, Y_r) in the laser radar coordinate system to P(u', v') in the pixel coordinate system involves only some internal parameters of the camera. These parameters can be determined by the Zhang Zhengyou calibration method, which completes the coordinate conversion.
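The patent's formulas (1)-(4) are reproduced only as images in the published text, so the following Python sketch illustrates the general shape of this conversion rather than the exact equations: the axis assignment between the radar and camera frames (X_c = X_r − L, Y_c = H_C − H_r, Z_c = Y_r) is an assumption, and all numeric values (heights, focal length, pixel size, principal point) are illustrative placeholders; only the pinhole projection and the image-to-pixel step follow the standard form described above.

```python
# Sketch of the lidar -> camera -> pixel conversion described above. The
# patent's formulas (1)-(4) appear only as images, so the axis assignment
# below (X_c = X_r - L, Y_c = H_C - H_r, Z_c = Y_r) is an assumption; only
# the pinhole projection and the image-to-pixel step follow the standard form.

H_C = 0.60    # camera height above the ground (m) - example value
H_R = 0.35    # lidar height above the ground (m) - example value
L = 0.10      # lateral offset between the two sensor centers (m) - example value
F = 0.004     # camera focal length (m) - example value
DX = DY = 2e-6         # physical pixel size along u and v (m) - example values
U0, V0 = 640.0, 360.0  # pixel coordinates of the principal point - example values


def lidar_to_pixel(x_r: float, y_r: float) -> tuple[float, float]:
    """Project a 2-D lidar point (X_r, Y_r) to pixel coordinates (u', v')."""
    # Assumed rigid offset between the lidar and camera centers.
    x_c = x_r - L        # lateral shift between the sensor centers
    y_c = H_C - H_R      # vertical offset (scan plane parallel to image plane)
    z_c = y_r            # lidar range axis taken as the camera depth axis
    # Pinhole projection onto the image plane.
    x_img = F * x_c / z_c
    y_img = F * y_c / z_c
    # Image coordinates -> pixel coordinates.
    u = x_img / DX + U0
    v = y_img / DY + V0
    return u, v


if __name__ == "__main__":
    print(lidar_to_pixel(0.5, 3.0))   # e.g. a point 3 m ahead, 0.5 m to the side
```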
2. The measured data is trained using the RBF neural network.
In FIG. 3, the input layer contains the data of the laser radar and the depth camera, the radial basis layer contains the centers of the basis functions, the result obtained through the radial basis functions is sent to the linear layer, and a predicted value of the data is obtained through a linear transformation.
The RBF neural network uses the radial basis function (RBF) method; the activation function of the radial basis neural network can be expressed as:
R(x_p, c_i) = exp(−‖x_p − c_i‖² / (2σ²))
In the above formula, x_p represents the input vector, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function. With x_p as input, the structure of the radial basis function neural network gives the network output as:
y_j = Σ_{i=1}^{h} w_{ij} · exp(−‖x_p − c_i‖² / (2σ²)),  j = 1, 2, ..., n
In the above formula, w_{ij} indicates the connection weight from the hidden layer to the output layer, and j = 1, 2, ..., n. Finally, the least squares loss function is adopted:
σ = (1/P) · Σ_{j=1}^{m} ‖d_j − y_j c_i‖²
In the above formula, d is the desired value and σ is the variance of the Gaussian function, which controls the radial acting range of the function and adjusts the sensitivity of the neurons, greatly improving the computing capability of the RBF neural network.
It can be seen that the choice of the radial basis centers largely influences the final predicted value. Assuming we have 40 sets of data, the center of the first radial basis is taken as one data point, and so on; the calculation result of the radial basis network is then obtained using the activation function of the radial basis neural network, after which the variance and the weights from the radial basis layer to the linear layer are calculated.
The learning method of the self-organizing selection center is mainly completed in two steps, wherein the first step is an unsupervised learning process, and the variance of a basis function is calculated; and secondly, a supervised learning process is carried out, and the weight from the radial base layer to the linear layer is calculated.
The specific algorithm comprises the steps of selecting h centers for k-means clustering in the first step, and solving the variance of radial basis of a Gaussian kernel function by a formula:
σ = c_max / √(2h)
In the above formula, c_max is the maximum distance between the selected center points.
The second step calculates the weight from the radial base layer to the linear layer, and the formula obtained by simplifying is as follows:
w = exp((h / c_max²) · ‖x_p − c_i‖²),  p = 1, 2, ..., P;  i = 1, 2, ..., h
In the above formula, p = 1, 2, ..., P; i = 1, 2, ..., h; P is the number of input vectors.
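As a concrete illustration of the two-step learning described above, the sketch below trains a small RBF model in Python: k-means selects the h centers, the Gaussian width is set to c_max/√(2h), and the hidden-to-output weights are then fitted in the supervised step. Solving the weights by ordinary least squares is an assumption made here for simplicity, since the patent's simplified weight formula is available only as an image; the toy data and the use of scikit-learn's KMeans are likewise illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the two-step RBF training described above: (1) unsupervised k-means
# picks h centers and sets the Gaussian width from the maximum center spacing,
# (2) a supervised step solves for the hidden-to-output weights. Solving the
# weights by least squares is an assumption; the patent's weight formula is
# only available as an image.

def train_rbf(X, d, h=10, seed=0):
    """X: (P, n_features) sensor data, d: (P,) desired distances."""
    centers = KMeans(n_clusters=h, n_init=10, random_state=seed).fit(X).cluster_centers_
    # sigma = c_max / sqrt(2h), c_max being the largest distance between centers.
    c_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = c_max / np.sqrt(2.0 * h)
    # Gaussian (radial basis) activations.
    G = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
               / (2.0 * sigma ** 2))
    # Hidden-to-output weights fitted by least squares (supervised step).
    w, *_ = np.linalg.lstsq(G, d, rcond=None)
    return centers, sigma, w

def predict_rbf(X, centers, sigma, w):
    G = np.exp(-np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
               / (2.0 * sigma ** 2))
    return G @ w   # linear output layer

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0.3, 5.0, size=(40, 2))   # e.g. 40 lidar/camera range pairs (toy data)
    d = X.mean(axis=1)                        # toy "true" distances for illustration
    centers, sigma, w = train_rbf(X, d, h=8)
    print(predict_rbf(X[:3], centers, sigma, w))
```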
3. A corresponding fusion rule is formulated, and the motion state of the unmanned vehicle is estimated and updated in combination with the Bayesian estimation update.
Let us assume that the area detected by the lidar is Zone_laster and the area detected by the depth camera is Zone_camera; g_laster(i, j) and g_camera(i, j) respectively represent the matrices of the two corresponding areas and store the obstacle information of the two areas. The fusion model between them is as follows:
[Fusion model formula (10): rendered as an image in the original publication]
Here q(g_laster(i, j), g_camera(i, j)) represents the fusion rule; the specific rule is shown in FIG. 4, where d_l represents the distance measured by the laser radar and d_c the distance measured by the depth camera. Assume that the initial state probability density of the robot is p(x_0|z_0) = p(x_0), where x_0 is the state of the system at the initial time and z_0 is the observation value at the initial time. The system posterior probability density can then be obtained as follows: first, the prior probability density p(x_{t-1}|z_{1:t-1}) at time t−1 is used to calculate p(x_t|z_{1:t-1}), where it is assumed that x_t depends only on x_{t-1} and that p(x_{t-1}|z_{1:t-1}) is known:
p(x_t|z_{1:t-1}) = ∫ p(x_t|x_{t-1}) p(x_{t-1}|z_{1:t-1}) dx_{t-1}
Then the observed value z_t at time t is used to correct p(x_t|z_{1:t-1}), giving the system posterior probability density p(x_t|z_{1:t}):
p(x_t|z_{1:t}) = p(z_t|x_t) p(x_t|z_{1:t-1}) / p(z_t|z_{1:t-1})
With z_t taken as an independent value, one obtains:
p(z_t|z_{1:t-1}) = ∫ p(z_t|x_t) p(x_t|z_{1:t-1}) dx_t
transforming the above by combining a distribution probability formula and a conditional probability formula
p(x_t|z_{1:t}) = p(z_t|x_t, z_{1:t-1}) p(x_t|z_{1:t-1}) / p(z_t|z_{1:t-1})
Let each observation be independent, obtain the final posterior probability:
p(x_t|z_{1:t}) = p(z_t|x_t) p(x_t|z_{1:t-1}) / ∫ p(z_t|x_t) p(x_t|z_{1:t-1}) dx_t
Here p(z_t|x_t) is the likelihood probability of the system observation equation, p(x_t|z_{1:t-1}) is the prior probability, and p(z_t|z_{1:t-1}) is the normalizing constant of the equation. The Bayesian estimate is then fused with the preceding region fusion rule to obtain the fused environmental obstacle data.
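To make the recursive update concrete, the sketch below runs the prediction and correction steps described above on a discretized grid of candidate obstacle distances, treating the lidar and depth-camera readings as independent measurements of the same distance. Modelling both likelihoods as Gaussians, and the specific noise values, are assumptions made for illustration only; the patent's region fusion rule q(·,·) is defined in FIG. 4 and is not reproduced here.

```python
import numpy as np

# Sketch of the Bayesian estimation update on a 1-D grid of candidate obstacle
# distances. Treating the lidar and depth-camera readings as independent
# Gaussian likelihoods is an assumption made for illustration; the patent only
# states that the observations are independent.

GRID = np.linspace(0.2, 10.0, 500)          # candidate distances x_t (m)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def predict(prior, motion_std=0.05):
    """Prediction step: p(x_t|z_{1:t-1}) = ∫ p(x_t|x_{t-1}) p(x_{t-1}|z_{1:t-1}) dx_{t-1}."""
    dx = GRID[1] - GRID[0]
    kernel = gaussian(GRID[:, None], GRID[None, :], motion_std)   # p(x_t | x_{t-1})
    pred = kernel @ prior * dx
    return pred / pred.sum()

def update(pred, z_lidar, z_camera, std_lidar=0.03, std_camera=0.10):
    """Correction step: p(x_t|z_{1:t}) ∝ p(z_t|x_t) p(x_t|z_{1:t-1})."""
    likelihood = gaussian(GRID, z_lidar, std_lidar) * gaussian(GRID, z_camera, std_camera)
    post = likelihood * pred
    return post / post.sum()   # the division plays the role of the normalizing constant

if __name__ == "__main__":
    belief = np.ones_like(GRID) / GRID.size   # flat initial density p(x_0)
    for z_l, z_c in [(2.01, 2.10), (1.95, 2.02), (1.90, 1.88)]:
        belief = update(predict(belief), z_l, z_c)
    print("estimated distance:", GRID[np.argmax(belief)])
```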
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined in the appended claims.

Claims (4)

1. The obstacle measuring method based on multi-information fusion is characterized by comprising the following steps of:
s1, transforming laser radar coordinates into a coordinate system of a camera through coordinate transformation, and acquiring surrounding obstacle information through a depth camera and a laser radar sensor;
s2, training the data of the measured obstacle information by utilizing an RBF neural network;
s3, formulating a fusion rule, and estimating and updating the motion state of the unmanned vehicle by combining Bayesian estimation and updating:
in step S1, a spatial coordinate system is first established on the robot itself, and the data of the laser radar is projected into the pixel coordinate system through coordinate-system conversion;
p point coordinates (X) in camera coordinate system c ,Y c ,Z c ) And P point coordinates (X r ,Y r ) The relationship of (2) is as follows:
[Formulas (1) and (2): rendered as images in the original publication]
in formulas (1) and (2), H_c is the height of the camera center above the ground, H_r is the height of the laser radar sensor center above the ground, and L is the lateral distance between the camera center and the laser radar sensor center;
the coordinates of the P point in the image coordinate system are obtained by the formulas (1) and (2):
[Formula (3): rendered as an image in the original publication]
in formula (3), f is the focal length of the camera; the conversion between image and pixel coordinates then yields the position (u', v') of the point P in the image:
[Formula (4): rendered as an image in the original publication]
in formula (4), u_0 and v_0 respectively represent the origin coordinates of the image coordinate system, and d_x and d_y represent the size of a pixel along the u and v axes;
in the step S3, the fusion model of the laser radar detection area and the depth camera detection area is as follows:
[Formula (10): rendered as an image in the original publication]
in formula (10), Zone_laster is the area detected by the laser radar, Zone_camera is the area detected by the depth camera, g_laster(i, j) and g_camera(i, j) represent the matrices of the two corresponding regions, and q(g_laster(i, j), g_camera(i, j)) represents the fusion rule;
the Bayesian estimation update finds the system posterior probability density according to the following steps:
first, the prior probability density p(x_{t-1}|z_{1:t-1}) at time t−1 is used to calculate p(x_t|z_{1:t-1}), where x_t depends only on x_{t-1} and p(x_{t-1}|z_{1:t-1}) is known:
p(x_t|z_{1:t-1}) = ∫ p(x_t|x_{t-1}) p(x_{t-1}|z_{1:t-1}) dx_{t-1}    (11),
then the observed value z_t at time t is used to correct p(x_t|z_{1:t-1}), giving the system posterior probability density p(x_t|z_{1:t}):
p(x_t|z_{1:t}) = p(z_t|x_t) p(x_t|z_{1:t-1}) / p(z_t|z_{1:t-1})    (12)
with z_t taken as an independent value:
p(z_t|z_{1:t-1}) = ∫ p(z_t|x_t) p(x_t|z_{1:t-1}) dx_t    (13)
transforming the above formula by combining the distribution probability formula and the conditional probability formula:
p(x_t|z_{1:t}) = p(z_t|x_t, z_{1:t-1}) p(x_t|z_{1:t-1}) / p(z_t|z_{1:t-1})    (14)
each observation is independent, resulting in a final posterior probability:
p(x_t|z_{1:t}) = p(z_t|x_t) p(x_t|z_{1:t-1}) / ∫ p(z_t|x_t) p(x_t|z_{1:t-1}) dx_t    (15)
in formulas (11)-(15), p(z_t|x_t) is the likelihood probability of the system observation equation, p(x_t|z_{1:t-1}) is the prior probability, and p(z_t|z_{1:t-1}) is the normalizing constant of the equation;
and fusing the Bayesian estimation with the previous region fusion rule to obtain fused environmental obstacle data.
2. The multi-information fusion obstacle measuring method according to claim 1, wherein the RBF neural network in step S2 includes an input layer, a hidden layer and an output layer; the transformation from the input layer to the hidden layer is nonlinear, the hidden layer trains and learns the data through radial basis functions, the transformation from the hidden layer to the output layer is linear, and the output of the network is a linear weighted sum of the hidden-unit outputs.
3. The multi-information fusion obstacle measuring method according to claim 2, wherein the input layer contains data of a laser radar and a depth camera, the radial base layer contains a center of a basis function, the obtained result is sent to the linear layer through the radial basis function, and a predicted value of the data is obtained through linear transformation;
the RBF neural network uses a radial basis function method, and the activation function of the radial basis neural network can be expressed as:
R(x_p, c_i) = exp(−‖x_p − c_i‖² / (2σ²))    (5)
in formula (5), x_p represents the input vector, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function;
with x_p as input, the structure of the radial basis function neural network gives the network output as:
y_j = Σ_{i=1}^{h} w_{ij} · exp(−‖x_p − c_i‖² / (2σ²)),  j = 1, 2, ..., n    (6)
in formula (6), w_{ij} represents the connection weight from the hidden layer to the output layer, and j = 1, 2, ..., n;
the loss function is expressed using least squares:
σ = (1/P) · Σ_{j=1}^{m} ‖d_j − y_j c_i‖²    (7)
in formula (7), d is the desired value, σ is the variance of the Gaussian function, j = 1, 2, ..., m, and m is the number of input vectors.
4. The multi-information fusion obstacle measurement method according to claim 2, wherein the output layer uses the learning method of self-organizing center selection, completed in two steps: the first step is an unsupervised learning process that calculates the variance of the basis functions, and the second step is a supervised learning process that calculates the weights from the radial basis layer to the linear layer; the specific algorithm is:
firstly, selecting h centers to perform k-means clustering, and solving the variance of radial basis of a Gaussian kernel function by a formula:
σ = c_max / √(2h)    (8)
in formula (8), c_max is the maximum distance between the selected center points;
secondly, calculating the weight from the radial base layer to the linear layer, and simplifying the obtained formula as follows:
w = exp((h / c_max²) · ‖x_p − c_i‖²),  p = 1, 2, ..., P;  i = 1, 2, ..., h    (9)
in formula (9), p = 1, 2, ..., P and i = 1, 2, ..., h, where P is the number of input vectors.
CN202111020484.2A 2021-09-01 2021-09-01 Multi-information fusion obstacle measurement method Active CN113689502B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111020484.2A CN113689502B (en) 2021-09-01 2021-09-01 Multi-information fusion obstacle measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111020484.2A CN113689502B (en) 2021-09-01 2021-09-01 Multi-information fusion obstacle measurement method

Publications (2)

Publication Number Publication Date
CN113689502A CN113689502A (en) 2021-11-23
CN113689502B true CN113689502B (en) 2023-06-30

Family

ID=78584688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111020484.2A Active CN113689502B (en) 2021-09-01 2021-09-01 Multi-information fusion obstacle measurement method

Country Status (1)

Country Link
CN (1) CN113689502B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115600158B (en) * 2022-12-08 2023-04-18 奥特贝睿(天津)科技有限公司 Unmanned vehicle multi-sensor fusion method
CN116358561B (en) * 2023-05-31 2023-08-15 自然资源部第一海洋研究所 Unmanned ship obstacle scene reconstruction method based on Bayesian multi-source data fusion

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102141776A (en) * 2011-04-26 2011-08-03 江苏科技大学 Particle filter and RBF identification-based neural network PID control parameter self-setting method
CN108710376A (en) * 2018-06-15 2018-10-26 哈尔滨工业大学 The mobile chassis of SLAM and avoidance based on Multi-sensor Fusion
CN110910498A (en) * 2019-11-21 2020-03-24 大连理工大学 Method for constructing grid map by using laser radar and binocular camera
CN111445170A (en) * 2020-04-30 2020-07-24 天津大学 Autonomous operation system and method for unmanned rolling machine group
CN111680726A (en) * 2020-05-28 2020-09-18 国网上海市电力公司 Transformer fault diagnosis method and system based on neighbor component analysis and k neighbor learning fusion
CN111708368A (en) * 2020-07-07 2020-09-25 上海工程技术大学 Intelligent wheelchair based on fusion of laser and visual SLAM
CN112025729A (en) * 2020-08-31 2020-12-04 杭州电子科技大学 Multifunctional intelligent medical service robot system based on ROS
US10928830B1 (en) * 2019-11-23 2021-02-23 Ha Q Tran Smart vehicle
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion
CN112868225A (en) * 2017-07-27 2021-05-28 阿里·埃布拉希米·阿夫鲁兹 Method and apparatus for combining data to construct a floor plan
CN113011351A (en) * 2021-03-24 2021-06-22 华南理工大学 Working method of intelligent shopping cart and intelligent shopping cart
CN113096183A (en) * 2021-03-18 2021-07-09 武汉科技大学 Obstacle detection and measurement method based on laser radar and monocular camera
CN113110451A (en) * 2021-04-14 2021-07-13 浙江工业大学 Mobile robot obstacle avoidance method with depth camera and single line laser radar fused

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11131989B2 (en) * 2017-08-02 2021-09-28 Strong Force Iot Portfolio 2016, Llc Systems and methods for data collection including pattern recognition
CN109378043A (en) * 2017-10-13 2019-02-22 北京昆仑医云科技有限公司 Medical image based on patient generates the system and method and medium of diagnosis report

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102141776A (en) * 2011-04-26 2011-08-03 江苏科技大学 Particle filter and RBF identification-based neural network PID control parameter self-setting method
CN112868225A (en) * 2017-07-27 2021-05-28 阿里·埃布拉希米·阿夫鲁兹 Method and apparatus for combining data to construct a floor plan
CN108710376A (en) * 2018-06-15 2018-10-26 哈尔滨工业大学 The mobile chassis of SLAM and avoidance based on Multi-sensor Fusion
CN110910498A (en) * 2019-11-21 2020-03-24 大连理工大学 Method for constructing grid map by using laser radar and binocular camera
US10928830B1 (en) * 2019-11-23 2021-02-23 Ha Q Tran Smart vehicle
CN111445170A (en) * 2020-04-30 2020-07-24 天津大学 Autonomous operation system and method for unmanned rolling machine group
CN111680726A (en) * 2020-05-28 2020-09-18 国网上海市电力公司 Transformer fault diagnosis method and system based on neighbor component analysis and k neighbor learning fusion
CN111708368A (en) * 2020-07-07 2020-09-25 上海工程技术大学 Intelligent wheelchair based on fusion of laser and visual SLAM
CN112025729A (en) * 2020-08-31 2020-12-04 杭州电子科技大学 Multifunctional intelligent medical service robot system based on ROS
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion
CN113096183A (en) * 2021-03-18 2021-07-09 武汉科技大学 Obstacle detection and measurement method based on laser radar and monocular camera
CN113011351A (en) * 2021-03-24 2021-06-22 华南理工大学 Working method of intelligent shopping cart and intelligent shopping cart
CN113110451A (en) * 2021-04-14 2021-07-13 浙江工业大学 Mobile robot obstacle avoidance method with depth camera and single line laser radar fused

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Information Fusion of Ultrasonic Sensor Based on RBF Network in Obstacle-Avoidance System of Mobile Robot;Wei Huang 等;《Applied Mechanics and Materials》;791-795 *
Optimized Radial Basis Function Neural Network Based Intelligent Control Algorithm of Unmanned Surface Vehicles;Renqiang Wang 等;《Journal of Marine Science and Engineering》;1-13 *
Research on indoor obstacle avoidance of a wheeled robot based on multi-sensor information fusion; 马信源; China Master's Theses Full-text Database, Information Science and Technology; I140-872 *
Pedestrian following by an indoor mobile robot based on depth images; 程熙; China Master's Theses Full-text Database, Information Science and Technology; I138-2857 *
Research on lidar-based robust map construction and small-obstacle measurement algorithms; 张国强; China Master's Theses Full-text Database, Information Science and Technology; I136-457 *
Vision-based robot pose measurement; 刘康; China Master's Theses Full-text Database, Information Science and Technology; I140-198 *
Obstacle measurement method based on multi-sensor data fusion; 刘卿卿 et al.; 单片机与嵌入式系统应用 (Microcontroller & Embedded Systems Applications); 55-59 *

Also Published As

Publication number Publication date
CN113689502A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN110645974B (en) Mobile robot indoor map construction method fusing multiple sensors
CN107065890B (en) Intelligent obstacle avoidance method and system for unmanned vehicle
CN111076733B (en) Robot indoor map building method and system based on vision and laser slam
CN113689502B (en) Multi-information fusion obstacle measurement method
Konolige et al. FrameSLAM: From bundle adjustment to real-time visual mapping
CN110263607B (en) Road-level global environment map generation method for unmanned driving
Engel et al. Deeplocalization: Landmark-based self-localization with deep neural networks
Thormann et al. Extended target tracking using Gaussian processes with high-resolution automotive radar
CN111880191B (en) Map generation method based on multi-agent laser radar and visual information fusion
CN114998276B (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN114237235B (en) Mobile robot obstacle avoidance method based on deep reinforcement learning
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN111998862A (en) Dense binocular SLAM method based on BNN
CN112052802A (en) Front vehicle behavior identification method based on machine vision
CN111123953B (en) Particle-based mobile robot group under artificial intelligence big data and control method thereof
CN112668469A (en) Multi-target detection and identification method based on deep learning
Abramov et al. A flexible modeling approach for robust multi-lane road estimation
CN114119659A (en) Multi-sensor fusion target tracking method
CN113160280A (en) Dynamic multi-target tracking method based on laser radar
CN116758153A (en) Multi-factor graph-based back-end optimization method for accurate pose acquisition of robot
CN115950414A (en) Adaptive multi-fusion SLAM method for different sensor data
Forkel et al. Dynamic resolution terrain estimation for autonomous (dirt) road driving fusing lidar and vision
CN112241002B (en) Novel robust closed-loop detection method based on Karto SLAM
CN103345762A (en) Bayes visual tracking method based on manifold learning
Han et al. GardenMap: Static point cloud mapping for Garden environment

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant