CN113418528A - Intelligent automobile-oriented traffic scene semantic modeling device, modeling method and positioning method - Google Patents
- Publication number
- CN113418528A (application number CN202110604605.1A)
- Authority
- CN
- China
- Prior art keywords
- semantic
- layer
- scene
- point cloud
- traffic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Abstract
The invention discloses a traffic scene semantic modeling device, modeling method and positioning method for intelligent automobiles. To achieve high modeling precision, the traffic scene is characterized finely at multiple scales and levels through three layers: a road position layer, a scene feature layer and a traffic semantic layer. The traffic semantic layer recognizes traffic elements by deep learning and removes dynamic targets such as pedestrians and vehicles from the scene, eliminating dynamic-target interference; the road position layer describes the positional relations among scenes; the scene feature layer describes the traffic scene fully while minimizing data storage. Together the three layers solve the problem of finely describing traffic scenes in a high-precision map.
Description
Technical Field
The invention belongs to the field of intelligent automobile technology, and particularly relates to a traffic scene semantic modeling device and modeling method for intelligent automobiles, and a positioning method using them.
Background
With the progress of science and technology, intelligent automobiles have become a popular research topic at home and abroad. In this field, the high-precision map is one of the key problems in realizing automobile intelligence: its construction is the basis of high-precision positioning, environment perception, decision planning and execution control. Unlike an ordinary map, which only needs to provide high-precision longitude and latitude information, a high-precision map must also describe the traffic scene, because latitude and longitude alone cannot meet the driving requirements of an intelligent automobile; the sufficiency of this description directly determines the accuracy of map construction. Therefore, semantic modeling of the traffic scene is a core technology of high-precision map construction. During traffic scene modeling, pedestrians and vehicles inevitably appear, and these dynamic targets directly degrade the map construction precision.
Disclosure of Invention
Aiming at these problems, the invention provides a traffic scene semantic modeling device and method for intelligent automobiles, and a positioning method using the modeling device. To guarantee modeling precision, the traffic scene is characterized finely at multiple scales and levels through three layers: a road position layer, a scene feature layer and a traffic semantic layer. The traffic semantic layer recognizes traffic elements by deep learning and removes dynamic targets such as pedestrians and vehicles from the scene, eliminating dynamic-target interference; the road position layer describes the positional relations among scenes; the scene feature layer describes the traffic scene fully while minimizing data storage. Together the three layers solve the problem of finely describing traffic scenes in a high-precision map.
The technical scheme is as follows:
The traffic scene semantic modeling device for intelligent automobiles comprises a multi-source heterogeneous data acquisition system, a multi-source heterogeneous sensor calibration and fusion system, and a feature processing system. The data acquisition system is responsible for acquiring traffic-scene semantic data and comprises three laser radars, a Beidou system, a differential Beidou base station and an inertial navigation system: the laser radars are installed at arbitrary positions outside the intelligent automobile, the Beidou system at any position on the roof, the differential Beidou base station at an unobstructed elevated position near the detected road scene, and the inertial navigation system at any position inside the automobile. The calibration and fusion system fuses the multiple semantic features in the traffic scene model and consists mainly of a flat plane calibration plate with a reflective surface. The feature processing system extracts semantic features from the acquired data and consists mainly of a vehicle-mounted industrial personal computer installed at any position inside the automobile.
The invention also discloses a traffic scene semantic modeling method for the intelligent automobile, which comprises modeling at a plurality of scales and different levels and comprises the following steps:
(1) data acquisition: (1.1) calibrating multiple sensors; (1.2) multi-source data acquisition and fusion;
(2) multi-scale traffic scene semantic modeling: (2.1) constructing a road position layer; (2.2) constructing a traffic semantic layer; and (2.3) constructing a scene feature layer.
Further, the multi-sensor calibration method in step (1.1) comprises: a. A smooth calibration plate with a reflective surface is placed in front of the three laser radars, and the three laser radars shoot at the calibration plate simultaneously and acquire data; the acquisition is performed three times. b. Plane fitting is carried out on the calibration-plate data and the plane equation of the calibration plate is calculated; for the n-th acquisition, the fitted plane of each laser radar is
a_n x + b_n y + c_n z + d_n = 0
where n = 1, 2, 3 indexes the acquisition and a_n, b_n, c_n, d_n are the four coefficients of the plane equation. c. Taking the calibration plate as the reference plane, the positional relation of the three laser radars is calculated and expressed by a rotation R_j and a translation t_j that map the j-th radar's coordinates into the unified coordinate system:
[x y z]^T = R_j [X_j Y_j Z_j]^T + t_j
where j is the j-th laser radar. The calibration is completed by the above formula.
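The plane fitting in step b can be realized in several standard ways; the sketch below (not part of the patent, an illustrative least-squares fit via SVD) recovers the coefficients a, b, c, d from a set of 3D points on the calibration plate:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a*x + b*y + c*z + d = 0 to an (N, 3) point array.

    The normal (a, b, c) is the right-singular vector of the centered points
    with the smallest singular value; d then follows from the centroid.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # unit normal of the best-fit plane
    d = -normal.dot(centroid)
    return np.append(normal, d)          # (a, b, c, d), with ||(a, b, c)|| = 1

# Synthetic calibration-plate points lying on the plane z = 0.
rng = np.random.default_rng(0)
pts = np.zeros((100, 3))
pts[:, :2] = rng.uniform(-1.0, 1.0, (100, 2))
a, b, c, d = fit_plane(pts)
```

For points on z = 0 the recovered normal is (0, 0, ±1) with d = 0, as expected.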
Further, (1.2.1) during the driving of the intelligent automobile, the three laser radars respectively collect laser point clouds of the surrounding traffic scene; while the point clouds are received, the Beidou receiver synchronously receives longitude and latitude information, and high-precision position information is obtained through the inertial navigation system and the Beidou differential base station.
(1.2.2) According to the calibration result of step (1.1), the point clouds collected by the three different laser radars are mapped into a unified coordinate system, completing the fusion between the different sensors:
[x y z]^T = R_j [X_j Y_j Z_j]^T + t_j
where [X_j Y_j Z_j]^T is a coordinate in the frame of the j-th (j = 1, 2, 3) laser radar, [x y z]^T is the corresponding coordinate in the unified coordinate system, and R_j, t_j are the position-relation matrices to the unified coordinate system, of sizes 3 × 3 and 3 × 1 respectively.
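The mapping into the unified coordinate system is a rigid transform per lidar; a minimal sketch (the rotation and translation values below are illustrative stand-ins for a real calibration result):

```python
import numpy as np

def to_unified(points_j, R_j, t_j):
    """Map an (N, 3) point cloud from lidar j into the unified frame:
    [x y z]^T = R_j [X_j Y_j Z_j]^T + t_j, applied row-wise."""
    return points_j @ R_j.T + t_j

# Example calibration: 90-degree rotation about z plus a 1 m shift along x.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])
t = np.array([1., 0., 0.])
cloud = np.array([[1., 0., 0.]])
fused = to_unified(cloud, R, t)
```

Here the point (1, 0, 0) rotates to (0, 1, 0) and shifts to (1, 1, 0) in the unified frame.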
Further, the specific method for constructing the road position layer in the step (2.1) is as follows:
(2.1.1) The positional relation between point cloud frames is calculated and expressed by a rotation matrix A and a translation vector B:
P_{i+1} = A_{i+1} P_i + B_{i+1}
where P_i denotes the point cloud data of the i-th frame, each frame carrying high-precision position information, i = 0, 1, 2, 3, …; A_{i+1} and B_{i+1} express the positional relation between frame i+1 and frame i, A_{i+1} being a 3 × 3 matrix and B_{i+1} a 3 × 1 vector.
(2.1.2) The inter-frame positional relation of the point cloud is optimized. The Kalman filtering idea is introduced, taking the laser point cloud as the measurement set and the high-precision Beidou information as the observation set. The Beidou information forms the observation set:
Z_b = H_b W_k
where Z_b is the observation set, H_b is the observation matrix, and W_k is the Beidou data, containing the longitude and latitude coordinates of the previous state and of the current state. The laser point cloud forms the measurement set:
Z_l = E L_k
where Z_l is the measurement set, E is the identity matrix serving as the measurement matrix, and L_k is the laser point cloud data. The two are fused, and a state transition matrix F is added to complete the optimization of the relation:
X_k = F X_{k-1} + K_k (Z_b - H_b F X_{k-1})
where K_k is the Kalman gain.
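The fusion described above follows the standard linear Kalman predict/update cycle; the sketch below is a generic illustration, not the patent's implementation, with illustrative F, H, Q, R matrices (a 2-state position filter where lidar odometry drives the prediction and the Beidou fix corrects it):

```python
import numpy as np

def kalman_update(x_prev, P_prev, z_obs, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    x_pred = F @ x_prev                       # state transition (prediction)
    P_pred = F @ P_prev @ F.T + Q             # predicted covariance
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z_obs - H @ x_pred) # correct with the observation
    P_new = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_new, P_new

F = H = np.eye(2)                             # illustrative models
x, P = kalman_update(np.array([0., 0.]), np.eye(2),
                     np.array([1., 1.]),      # Beidou observation
                     F, 0.01 * np.eye(2), H, np.eye(2))
```

With equal prior and observation uncertainty, the fused estimate lands roughly midway between prediction and observation, and the posterior covariance shrinks below the prior.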
further, the specific method for constructing the traffic semantic layer in the step (2.2) is as follows:
(2.2.1) constructing a traffic scene semantic data set, including but not limited to motor vehicles, pedestrians, buildings, trees, traffic signs and the like.
(2.2.2) Deep learning is introduced to discriminate the semantics of the traffic scene. A convolutional neural network oriented to traffic scenes is constructed with 34 layers, as shown in fig. 9. In the figure, "3 × 3 conv, 64" denotes a convolutional layer with 3 × 3 filters and 64 channels, which extracts traffic-scene semantic features from the image; "avgpool" is an average pooling layer that compresses the features and simplifies computation; "fc" is a fully connected layer that labels the convolution result and judges the traffic semantic features. The network is expressed as:
Y = F(X, {w_i}) + X
where X is the laser point cloud, F(X, {w_i}) is the residual mapping learned in training with weights w_i, and the final output Y is the semantics of the traffic scene.
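The residual form Y = F(X, {w_i}) + X can be illustrated with a toy numpy block (dense weights standing in for the convolutional layers; this is a sketch of the formula, not the patent's network):

```python
import numpy as np

def residual_block(x, w1, w2):
    """Y = F(X, {w_i}) + X, with F = two linear maps and a ReLU between."""
    f = np.maximum(w1 @ x, 0.0)   # first mapping + ReLU
    f = w2 @ f                    # second mapping (residual branch)
    return f + x                  # shortcut connection adds the input back

x = np.array([1.0, -2.0, 3.0])
w_zero = np.zeros((3, 3))
y = residual_block(x, w_zero, w_zero)
```

With zero weights the residual branch vanishes and the block reduces to the identity Y = X, the shortcut property that keeps very deep networks trainable and motivates the skip connections mentioned in the embodiment.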
(2.2.3) The semantics are judged and the semantic features of the dynamic targets, including but not limited to pedestrians and motor vehicles, are removed; the remaining semantic point cloud constitutes the traffic semantic layer.
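Once each point carries a semantic label, removing dynamic targets is a simple filter; a minimal sketch (the class names are illustrative, not the patent's label set):

```python
# Labels assumed per-point; "pedestrian" and "vehicle" are example dynamic classes.
DYNAMIC = {"pedestrian", "vehicle"}

def build_semantic_layer(points, labels):
    """Keep only points whose semantic label is a static class."""
    return [(p, c) for p, c in zip(points, labels) if c not in DYNAMIC]

cloud = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 1.0)]
labels = ["building", "pedestrian", "tree"]
layer = build_semantic_layer(cloud, labels)
```

Only the building and tree points survive; the pedestrian point is discarded before mapping.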
Further, the specific method for constructing the scene feature layer in the step (2.3) is as follows:
(2.3.1) for the laser radar point cloud, projecting the point cloud onto an XOY plane, drawing a grid in the plane, wherein the grid size can be adaptively adjusted according to the traffic scene characteristic quantity, for example, in a campus scene, the grid can be divided into 4 × 4 sizes;
(2.3.2) counting the point cloud laser points in the grid, and setting a threshold value sigma, wherein the threshold value can be set according to the evaluation experience value of a specific traffic scene, for example, the threshold value set in a campus scene is 64, and when the point cloud points are smaller than the threshold value, filtering the point cloud;
(2.3.3) calculating the height difference of the residual point cloud, setting a point cloud threshold xi, for example, setting a threshold value of 1 in a campus scene, and filtering the point cloud when the height difference is smaller than the threshold value;
(2.3.4) calculating the mean value and the variance of the residual point clouds in the grids, setting a threshold value epsilon, filtering the point clouds when the variance is larger than the threshold value, wherein the residual point clouds are traffic scene features, and completing feature layer construction.
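Steps (2.3.1) to (2.3.4) can be sketched as a single grid-filtering pass. The thresholds below reuse the campus-scene examples from the text (sigma = 64 points, xi = 1 height difference); the variance threshold eps and the cell size are illustrative assumptions:

```python
import numpy as np

def scene_feature_layer(points, cell=4.0, sigma=64, xi=1.0, eps=2.0):
    """Grid-based feature filtering over an (N, 3) point cloud.

    2.3.1: bucket points into an XOY-plane grid of the given cell size.
    2.3.2: drop cells with fewer than sigma points.
    2.3.3: drop cells whose height difference is below xi (e.g. flat ground).
    2.3.4: drop cells whose height variance exceeds eps (scattered clutter).
    """
    cells = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        cells.setdefault(key, []).append(p)
    kept = []
    for pts in cells.values():
        pts = np.asarray(pts)
        if len(pts) < sigma:            # 2.3.2: too sparse
            continue
        z = pts[:, 2]
        if z.max() - z.min() < xi:      # 2.3.3: too flat
            continue
        if z.var() > eps:               # 2.3.4: too scattered
            continue
        kept.extend(pts)
    return np.asarray(kept)

# A dense vertical "pole" of 100 points survives; a lone stray point does not.
pole = np.column_stack([np.full(100, 1.0), np.full(100, 1.0),
                        np.linspace(0.0, 2.0, 100)])
sparse = np.array([[10.0, 10.0, 0.0]])
feats = scene_feature_layer(np.vstack([pole, sparse]))
```

Column-like structures (trunks, poles, building edges) pass all three tests, which is why this layer is compact yet distinctive for matching.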
The invention has the beneficial effects that:
1) aiming at the key problems of the intelligent automobile, the invention only uses 3 laser radars, the Beidou system and the inertial navigation system, and simultaneously calibrates and fuses the 3 laser radars only through one plane calibration plate, so that the data acquired at different positions can be fused under one coordinate system, the blind area is reduced to the maximum extent, and the multi-level and multi-scale traffic scene modeling is realized.
2) The invention introduces deep learning thought, constructs a traffic semantic layer and makes semantic classification on traffic scene targets. The classification method can solve the problem of accuracy reduction caused by moving objects during high-accuracy map construction on one hand, and can improve positioning accuracy and efficiency on the other hand.
3) The invention provides a feature extraction method for columnar laser point clouds, which realizes feature extraction by calculating the height difference of the numerous laser points. This solves the difficulty of feature extraction from three-dimensional data, greatly reduces the storage space of the high-precision map, and improves map construction and positioning efficiency.
Drawings
FIG. 1 is a block diagram of the overall system of the present invention;
FIG. 2 is a flow chart of a traffic scene semantic modeling method of the present invention;
FIG. 3 is a schematic view of a road location layer of the present invention;
FIG. 4 is a Tri-net flow diagram of the present invention;
FIG. 5 is a schematic illustration of laser features of the present invention;
FIG. 6 is a schematic diagram of scene feature layers according to the present invention;
FIG. 7 is a flow chart of the high precision positioning of the present invention;
FIG. 8 shows the high accuracy positioning result of the present invention;
FIG. 9 shows a traffic scene-oriented convolutional neural network constructed according to the present invention.
Detailed Description
The technical solution of the present invention is described in detail below, but the scope of the present invention is not limited to the embodiments.
The device can accurately acquire traffic scenes; after calibration with the calibration plate it realizes multi-sensor data fusion, and finally multi-level, multi-scale traffic scene semantic modeling, reducing storage space and improving modeling precision. The structural schematic diagram of the system is shown in fig. 1. The system comprises a multi-source heterogeneous data acquisition system, a multi-source heterogeneous sensor calibration and fusion system, and a feature processing system. The data acquisition system comprises three laser radars 1, 2 and 3, a Beidou system 4, a differential Beidou base station 8 and an inertial navigation system 5. The laser radars 1, 2 and 3 can be installed at any position outside the intelligent automobile; in the drawing they are installed at the front, middle and rear of the automobile respectively. The Beidou system 4 is installed at any position on the roof, and the differential Beidou base station 8 at any open position in the traffic scene. The three laser radars are interconnected by network cable, the Beidou system 4 is connected with the inertial navigation system 5 through an RS232 data cable, and the Beidou system 4 communicates with the differential Beidou base station 8 by wireless satellite communication. The multi-source heterogeneous sensor calibration and fusion system is mainly a plane calibration plate 9; the plate only needs to satisfy the single constraint of being planar, is placed in the scanning range of the laser radars outside the vehicle, and is scanned by the laser radars to realize the association between modules. The feature processing system is mainly a vehicle-mounted industrial personal computer 7 installed at any position in the vehicle, connected to the laser radars by network cable and to the inertial navigation system through an RS232 data cable.
The method comprises data acquisition and multi-scale traffic scene semantic modeling; its flow chart is shown in figure 2, and the specific steps are as follows:
(1) data acquisition
(1.1) Data acquisition: drive the intelligent automobile onto any road and switch on all sensor systems, ensuring that the laser radars and the Beidou system keep their relative positions unchanged during data acquisition.
(1.2) Place the calibration plate opposite the laser radars so that all three scan it simultaneously, and transmit the data to the industrial personal computer. The industrial personal computer fits the plane and computes the plane equation of the calibration plate; for the n-th acquisition, the fitted plane of each laser radar is
a_n x + b_n y + c_n z + d_n = 0
where n = 1, 2, 3 indexes the acquisition. Taking the calibration plate as the reference plane, the positional relation of the three laser radars is computed and expressed by a rotation R_j and a translation t_j:
[x y z]^T = R_j [X_j Y_j Z_j]^T + t_j
where j is the j-th laser radar. The calibration is completed by the above formula.
(1.3) According to the calibration result, project the three laser radars uniformly into the same world coordinate system:
[x y z]^T = R_j [X_j Y_j Z_j]^T + t_j
where [X_j Y_j Z_j]^T is a coordinate in the frame of the j-th (j = 1, 2, 3) laser radar, [x y z]^T is the corresponding coordinate in the unified coordinate system, and R_j, t_j are the position-relation matrices to the unified coordinate system, of sizes 3 × 3 and 3 × 1 respectively.
(2) Establishing traffic scene semantic model
The traffic scene is divided into a series of collection nodes, each node guaranteed to contain position information and laser point cloud information. On this basis, each node is developed into a depth representation model comprising a road position layer, a scene feature layer and a traffic semantic layer.
(2.1) The collected Beidou information is combined with the laser point cloud, and the positional relation between nodes is calculated by, without limitation, the iterative closest point (ICP) method:
min over A, B of Σ_i ||A p_i + B - q_i||^2
where p_i and q_i are corresponding points in the point clouds of neighbouring nodes, and A, B are the rotation and translation between them. The position information obtained from the Beidou system and the positional relations computed by this formula generate the road position layer, shown in fig. 3.
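Inside each ICP iteration, once correspondences are fixed, the optimal rotation A and translation B have a closed form (the SVD-based Kabsch alignment). A sketch under the assumption of known correspondences, not the patent's full ICP loop:

```python
import numpy as np

def align(P, Q):
    """Rotation A and translation B with Q ≈ P @ A.T + B (Kabsch algorithm).

    P, Q are (N, 3) arrays with P[i] corresponding to Q[i]; this is the
    closed-form step solved inside each ICP iteration.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1., 1., np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    A = Vt.T @ D @ U.T
    B = cq - A @ cp
    return A, B

P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
A_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
B_true = np.array([2., 0., 0.])
Q = P @ A_true.T + B_true                     # node i+1 seen from node i
A, B = align(P, Q)
```

With exact correspondences the true rotation and translation are recovered exactly; in practice ICP alternates this step with nearest-neighbour matching until convergence.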
(2.2) After the road position layer is generated, deep learning is introduced for the targets in the laser point cloud and a convolutional neural network named Tri-net is constructed, as shown in fig. 4. In the figure, X is the laser point cloud input, and the semantic classification result is obtained through the nonlinear mapping F(X) + X. To reduce computation, the method adds shortcut connections that skip several network layers, realizing semantic classification more efficiently. Semantic types include but are not limited to cars, pedestrians, traffic signs, lane lines, buildings, traffic lights and trees. The Tri-net formula is:
Y = F(X, {w_i}) + X
where X is the laser point cloud, F(X, {w_i}) is the residual mapping with weights w_i, and the final output Y is the semantics of the traffic scene.
After semantic classification is obtained, dynamic targets in the result, such as automobiles and pedestrians, are removed; the remaining point cloud and its semantic labels constitute the traffic semantic layer.
(2.3) After the traffic semantic layer is obtained, project the remaining laser radar point cloud onto the XOY plane and draw a grid in the plane. Count the laser points in each grid cell and set a threshold; when the number of points in a cell is below the threshold, filter the cell's points out. Calculate the height difference of the remaining points and set a threshold; filter out cells whose height difference is below it. Finally, calculate the mean and variance of the remaining points, set a threshold on the variance, and filter out cells whose variance exceeds it. The remaining points are the traffic scene features, completing the construction of the scene feature layer, as shown in fig. 5; the scene feature layer itself is shown in fig. 6.
The embodiment applies the constructed traffic scene semantic model, and the main application field is high-precision positioning, and the high-precision positioning process is shown in fig. 7.
(1) Start the intelligent automobile's sensors and acquire Beidou information and laser point cloud information.
(2) Extract the road position layer of the semantic model and match it with the Beidou information to obtain the nearest node; draw a circle with this node as its center and r as its radius. The circle is the positioning range, and all nodes inside it are candidate points.
(3) Extract the semantic layer of the semantic model and match it with the laser point cloud of the intelligent automobile; recognition and matching through the Tri-net neural network yield the nearest positioning node.
(4) Extract the scene feature layer of the model and match it with the laser point cloud of the intelligent automobile; matching methods include but are not limited to the ICP (iterative closest point) algorithm. This yields the positional relation between the current vehicle position and the nearest node, realizing positioning. Fig. 8 shows the positioning result of an intelligent vehicle using the apparatus and method of the present invention on a traffic road; the map contains 100 positioning nodes, the mean positioning error is 7.85 cm, and the standard deviation is 6.00 cm.
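The candidate-point selection of step (2) amounts to a radius query around the Beidou fix; a minimal sketch using planar (x, y) node coordinates for simplicity (node names and positions are illustrative):

```python
import math

def candidate_nodes(position, nodes, r):
    """Return the names of all map nodes within radius r of the Beidou fix."""
    px, py = position
    return [name for name, (nx, ny) in nodes.items()
            if math.hypot(nx - px, ny - py) <= r]

# Illustrative road-position-layer nodes and a fix at the origin.
nodes = {"n1": (0.0, 0.0), "n2": (3.0, 4.0), "n3": (30.0, 0.0)}
cands = candidate_nodes((0.0, 0.0), nodes, r=10.0)
```

Only n1 and n2 fall inside the 10 m circle; the subsequent Tri-net and feature-layer matching then runs against these candidates only, which is what keeps positioning efficient.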
The above-listed series of detailed descriptions are merely specific illustrations of possible embodiments of the present invention, and they are not intended to limit the scope of the present invention, and all equivalent means or modifications that do not depart from the technical spirit of the present invention are intended to be included within the scope of the present invention.
Claims (10)
1. A traffic scene semantic modeling device for an intelligent automobile is characterized by comprising a multi-source heterogeneous data acquisition system, a multi-source heterogeneous sensor calibration and fusion system and a feature processing system; the multi-source heterogeneous data acquisition system comprises three laser range finders, a Beidou system, a differential Beidou base station and an inertial navigation system and is mainly responsible for acquiring data of traffic scene semantics; the multi-source heterogeneous sensor calibration and fusion system is mainly used for fusing multi-semantic features in a traffic scene model; the feature processing system is mainly used for extracting semantic features of the acquired data.
2. The intelligent automobile-oriented traffic scene semantic modeling device as claimed in claim 1, wherein the laser range finder is installed at any position outside the intelligent automobile, the Beidou system is installed at any position on the roof of the automobile, the differential Beidou base station is installed at a high position, which is not shielded, near the detected road scene, and the inertial navigation system is installed at any position inside the intelligent automobile; the multi-source heterogeneous sensor calibration and fusion system mainly comprises a flat plane calibration plate with a reflection function; the characteristic processing system is mainly a vehicle-mounted industrial personal computer and is installed at any position in the intelligent automobile.
3. A traffic scene semantic modeling method for intelligent automobiles is characterized by comprising a plurality of levels of modeling with different scales, and comprises the following steps:
S1, data acquisition: S1.1, calibrating multiple sensors; S1.2, multi-source data acquisition and fusion;
S2, multi-scale traffic scene semantic modeling: S2.1, constructing a road position layer; S2.2, constructing a traffic semantic layer; S2.3, constructing a scene feature layer.
4. The intelligent automobile-oriented traffic scene semantic modeling method according to claim 3, characterized in that the method for calibrating multiple sensors in step S1.1 is as follows:
S1.1.1, placing a flat calibration plate with a reflective surface in front of the three laser radars, and making the three laser radars shoot at the calibration plate simultaneously and acquire data, the acquisition being performed three times;
S1.1.2, performing plane fitting on the calibration-plate data and calculating the plane equation of the calibration plate; for the n-th acquisition, the fitted plane of each laser radar is
a_n x + b_n y + c_n z + d_n = 0
where n = 1, 2, 3 indexes the acquisition and a_n, b_n, c_n, d_n are the four coefficients of the plane equation;
S1.1.3, taking the calibration plate as the reference plane, calculating the positional relation of the three laser radars, expressed by a rotation R_j and a translation t_j:
[x y z]^T = R_j [X_j Y_j Z_j]^T + t_j
where j is the j-th laser radar; the calibration is completed by this formula.
5. The intelligent automobile-oriented traffic scene semantic modeling method according to claim 3, characterized in that the method for multi-source data acquisition and fusion in S1.2 is as follows:
S1.2.1, during the driving of the intelligent automobile, the three laser radars respectively collect laser point clouds of the surrounding traffic scene; while the laser radars receive the point clouds, the Beidou receiver synchronously receives longitude and latitude information, and high-precision position information is obtained through the inertial navigation system and the Beidou differential base station;
S1.2.2, mapping the point clouds collected by the three different laser radars into a unified coordinate system according to the calibration result of step S1.1, completing the fusion between the different sensors:
[x y z]^T = R_j [X_j Y_j Z_j]^T + t_j
where [X_j Y_j Z_j]^T is a coordinate in the frame of the j-th (j = 1, 2, 3) laser radar, [x y z]^T is the corresponding coordinate in the unified coordinate system, and R_j, t_j are the position-relation matrices to the unified coordinate system, of sizes 3 × 3 and 3 × 1 respectively.
6. The intelligent automobile-oriented traffic scene semantic modeling method according to claim 3, characterized in that the specific method for constructing the road position layer in the step S2.1 is as follows:
S2.1.1, calculating the positional relationship between point cloud frames, represented by the rotation matrix A and the translation vector B, as shown in the following formula:

P_{i+1} = A_{i+1}·P_i + B_{i+1}

In the formula, P_i is the point cloud data of the ith frame, which carries high-precision position information (i = 0, 1, 2, 3, …); A_{i+1}, B_{i+1} denote the positional relationship between the (i+1)th frame and the ith frame, A_{i+1} being a 3 × 3 matrix and B_{i+1} a 3 × 1 vector.
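The patent does not say how A_{i+1} and B_{i+1} are estimated; one common closed-form choice, shown here as a sketch with synthetic correspondences, is the Kabsch/SVD rigid alignment:

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate A (3x3 rotation) and B (3-vector) minimising ||A @ p + B - q||
    over corresponding rows of P and Q, via the Kabsch/SVD method."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    A = Vt.T @ U.T
    if np.linalg.det(A) < 0:             # guard against a reflection solution
        Vt[-1] *= -1.0
        A = Vt.T @ U.T
    B = cq - A @ cp
    return A, B

# Synthetic frames: frame i points P, frame i+1 points Q = A_true @ p + B_true.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
A_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
B_true = np.array([2.0, 0.0, 0.0])
Q = P @ A_true.T + B_true
A_est, B_est = rigid_transform(P, Q)     # recovers A_true and B_true
```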
S2.1.2, optimizing the inter-frame positional relationship of the point clouds by introducing the idea of Kalman filtering, taking the laser point cloud as the measurement set and the high-precision Beidou information as the observation set. The Beidou information forms the observation set, as shown in the following formula:

Z_b = H_b·W_k

In the formula, Z_b is the observation set, H_b is the observation matrix, and W_k is the Beidou data. The laser point cloud forms the measurement set, as shown in the following formula:

Z_l = E·L_k

In the formula, Z_l is the measurement set, E is the identity matrix serving as the measurement matrix, and L_k is the laser point cloud data. The two are fused to complete the optimization of the positional relationship, as shown in the following formula:
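The fusion idea of S2.1.2 can be illustrated with a minimal scalar Kalman update: the scan-matching position plays the role of the prediction, the Beidou fix the observation. All numbers below are hypothetical:

```python
def kalman_update(x_pred, P_pred, z, R_obs, H=1.0):
    """One scalar Kalman update fusing a predicted state with an observation."""
    K = P_pred * H / (H * P_pred * H + R_obs)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)       # corrected estimate
    P_new = (1.0 - K * H) * P_pred              # reduced uncertainty
    return x_new, P_new

x_lidar, P_lidar = 10.4, 4.0     # position from point-cloud registration (less certain)
z_beidou, R_beidou = 10.0, 1.0   # high-precision Beidou observation (more certain)
x_fused, P_fused = kalman_update(x_lidar, P_lidar, z_beidou, R_beidou)
# x_fused = 10.08: pulled toward the more certain Beidou observation.
```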
7. The intelligent automobile-oriented traffic scene semantic modeling method according to claim 3, characterized in that the specific method for constructing the traffic semantic layer in the step S2.2 is as follows:
S2.2.1, constructing a traffic-scene semantic data set including but not limited to motor vehicles, pedestrians, buildings, trees and traffic signs;
S2.2.2, introducing deep learning to distinguish the semantics of the traffic scene: constructing a convolutional neural network oriented to the traffic scene, as shown in the following formula:
Y=F(X,{wi})+X
wherein X is the laser point cloud, the residual mapping F(X, {w_i}) is applied to it, and the final output Y is the semantic attribute of the traffic scene;
S2.2.3, judging the semantic features and removing those of dynamic targets, the dynamic targets including but not limited to pedestrians and motor vehicles; the remaining semantic point cloud constitutes the traffic semantic layer.
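The residual form Y = F(X, {w_i}) + X can be demonstrated with a tiny NumPy block; the two-layer F and its random weights are stand-ins, not the patent's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.standard_normal((8, 8)) * 0.1   # stand-in weights, not trained
w2 = rng.standard_normal((8, 8)) * 0.1

def residual_block(x, wa, wb):
    f = np.maximum(x @ wa, 0.0) @ wb     # F(X, {w_i}): linear, ReLU, linear
    return f + x                          # identity shortcut adds X back

x = rng.standard_normal((4, 8))           # 4 points, 8 features each
y = residual_block(x, w1, w2)
# With zero weights F vanishes and the shortcut returns the input unchanged.
y_zero = residual_block(x, w1 * 0.0, w2 * 0.0)
```

The shortcut is why the formulation is robust: even an untrained F cannot destroy the input signal, since X is always carried through.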
8. The intelligent automobile-oriented traffic scene semantic modeling method according to claim 3, characterized in that the specific method for constructing the scene feature layer in the step S2.3 is as follows:
S2.3.1, projecting the point cloud onto the XOY plane and drawing a grid in the plane; the grid size can be adaptively adjusted according to the number of traffic-scene features, e.g. in a campus scene the grid can be divided into cells of size 4 × 4;
S2.3.2, counting the laser points in each grid cell and setting a threshold σ, which can be chosen from empirical evaluation of the specific traffic scene (e.g. 64 in a campus scene); when the number of points is below the threshold, the cell's point cloud is filtered out;
S2.3.3, calculating the height difference of the remaining point cloud and setting a threshold ξ (e.g. 1 in the campus scene); when the height difference is smaller than the threshold, the point cloud is filtered out;
S2.3.4, calculating the mean and variance of the remaining point cloud in each grid cell and setting a threshold ε; when the variance is larger than the threshold, the point cloud is filtered out. The remaining point cloud constitutes the traffic-scene features, completing construction of the feature layer.
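Steps S2.3.1–S2.3.4 amount to a per-cell filter. In this sketch, σ = 64 and ξ = 1 are the campus-scene values from the claim, while the cell size, ε and the synthetic cloud are illustrative:

```python
import numpy as np

def feature_cells(points, cell=4.0, sigma=64, xi=1.0, eps=2.0):
    """Project to XOY, grid the plane, and keep cells that pass the
    point-count (sigma), height-difference (xi) and variance (eps) tests."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    kept = []
    for key in {tuple(k) for k in keys}:
        z = points[np.all(keys == key, axis=1), 2]
        if len(z) < sigma:               # S2.3.2: too few returns
            continue
        if z.max() - z.min() < xi:       # S2.3.3: too flat (e.g. ground)
            continue
        if z.var() > eps:                # S2.3.4: too scattered
            continue
        kept.append(key)
    return kept

# A synthetic vertical structure: 100 returns spanning 2 m in one cell.
pole = np.column_stack([np.full(100, 1.0), np.full(100, 1.0),
                        np.linspace(0.0, 2.0, 100)])
cells = feature_cells(pole)              # the pole's cell survives all three tests
```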
9. The intelligent automobile-oriented traffic scene semantic modeling method according to claim 7, characterized in that the convolutional neural network model in S2.2.2 is constructed as follows: the network has 34 layers, comprising convolution layers, an average pooling layer and a fully connected layer. The convolution layers extract traffic-scene semantic features, including but not limited to edges and corners of scene targets; the average pooling layer compresses the features to simplify computation; and the fully connected layer labels the convolution results, i.e. judges the traffic semantic features.
10. The high-precision positioning method based on traffic scene semantic modeling is characterized by comprising the following steps:
(1) starting the intelligent automobile's sensors and acquiring Beidou information and laser point cloud information;
(2) extracting the road position layer from the semantic model and matching it with the Beidou information to obtain the nearest node; drawing a circle with this node as the center and r as the radius, the circle being the positioning range and all nodes inside it being candidate points;
(3) extracting the semantic layer from the semantic model and matching it with the laser point cloud of the intelligent automobile, the identification and matching being performed by a Tri-net neural network to obtain the nearest positioning node;
(4) extracting the scene feature layer from the model and matching it with the laser point cloud of the intelligent automobile, the matching method including but not limited to the ICP (Iterative Closest Point) algorithm, thereby obtaining the positional relationship between the current vehicle position and the nearest node and realizing the positioning.
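Step (2) of the positioning method, selecting candidate nodes inside a circle of radius r around the Beidou fix, can be sketched as follows (the node coordinates and r are made up):

```python
import numpy as np

# Road-position-layer nodes (XY, hypothetical) and the current Beidou fix.
nodes = np.array([[0.0, 0.0], [5.0, 0.0], [30.0, 40.0], [3.0, 4.0]])
fix = np.array([1.0, 1.0])
r = 10.0                                  # positioning-range radius

dists = np.linalg.norm(nodes - fix, axis=1)
candidates = nodes[dists <= r]            # all nodes inside the circle
nearest = nodes[np.argmin(dists)]         # node matched to the Beidou fix
# Three of the four nodes fall inside the circle; the nearest is (0, 0).
```

Steps (3) and (4) then refine the match among these candidates with semantic-layer and feature-layer registration.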
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110604605.1A CN113418528A (en) | 2021-05-31 | 2021-05-31 | Intelligent automobile-oriented traffic scene semantic modeling device, modeling method and positioning method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113418528A true CN113418528A (en) | 2021-09-21 |
Family
ID=77713470
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110604605.1A Pending CN113418528A (en) | 2021-05-31 | 2021-05-31 | Intelligent automobile-oriented traffic scene semantic modeling device, modeling method and positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113418528A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106802954A (en) * | 2017-01-18 | 2017-06-06 | 中国科学院合肥物质科学研究院 | Unmanned vehicle semanteme cartographic model construction method and its application process on unmanned vehicle |
CN110057373A (en) * | 2019-04-22 | 2019-07-26 | 上海蔚来汽车有限公司 | For generating the method, apparatus and computer storage medium of fine semanteme map |
CN112082565A (en) * | 2020-07-30 | 2020-12-15 | 西安交通大学 | Method, device and storage medium for location and navigation without support |
CN112484725A (en) * | 2020-11-23 | 2021-03-12 | 吉林大学 | Intelligent automobile high-precision positioning and space-time situation safety method based on multi-sensor fusion |
Non-Patent Citations (1)
Title |
---|
Hui Zhenyang; Cheng Penggen; Guan Yunlan; Nie Yunju: "A Review of Airborne LiDAR Point Cloud Filtering", Laser & Optoelectronics Progress, no. 06 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110988912B (en) | Road target and distance detection method, system and device for automatic driving vehicle | |
CN108955702B (en) | Lane-level map creation system based on three-dimensional laser and GPS inertial navigation system | |
CA3027921C (en) | Integrated sensor calibration in natural scenes | |
CN110531376B (en) | Obstacle detection and tracking method for port unmanned vehicle | |
CN111369541B (en) | Vehicle detection method for intelligent automobile under severe weather condition | |
CN109596078A (en) | Multi-information fusion spectrum of road surface roughness real-time testing system and test method | |
CN115032651B (en) | Target detection method based on laser radar and machine vision fusion | |
CN113126115B (en) | Semantic SLAM method and device based on point cloud, electronic equipment and storage medium | |
US11430087B2 (en) | Using maps comprising covariances in multi-resolution voxels | |
US11288861B2 (en) | Maps comprising covariances in multi-resolution voxels | |
CN110349192A (en) | A kind of tracking of the online Target Tracking System based on three-dimensional laser point cloud | |
CN115943439A (en) | Multi-target vehicle detection and re-identification method based on radar vision fusion | |
CN111461048B (en) | Vision-based parking lot drivable area detection and local map construction method | |
CN111860072A (en) | Parking control method and device, computer equipment and computer readable storage medium | |
CN113848545B (en) | Fusion target detection and tracking method based on vision and millimeter wave radar | |
CN115451948A (en) | Agricultural unmanned vehicle positioning odometer method and system based on multi-sensor fusion | |
CN117058646B (en) | Complex road target detection method based on multi-mode fusion aerial view | |
CN115128628A (en) | Road grid map construction method based on laser SLAM and monocular vision | |
CN113724387A (en) | Laser and camera fused map construction method | |
CN209214563U (en) | Multi-information fusion spectrum of road surface roughness real-time testing system | |
CN113536959A (en) | Dynamic obstacle detection method based on stereoscopic vision | |
CN115273068B (en) | Laser point cloud dynamic obstacle removing method and device and electronic equipment | |
CN113418528A (en) | Intelligent automobile-oriented traffic scene semantic modeling device, modeling method and positioning method | |
CN115184909B (en) | Vehicle-mounted multi-spectral laser radar calibration system and method based on target detection | |
CN114924288A (en) | System and method for constructing vehicle front three-dimensional digital elevation map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||