CN114037707A - Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system

Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system

Info

Publication number
CN114037707A
CN114037707A
Authority
CN
China
Prior art keywords
data
point cloud
target detection
sensing
task module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111270795.4A
Other languages
Chinese (zh)
Inventor
李克秋 (Li Keqiu)
刘熙来 (Liu Xilai)
周晓波 (Zhou Xiaobo)
谢琦 (Xie Qi)
邱铁 (Qiu Tie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202111270795.4A priority Critical patent/CN114037707A/en
Publication of CN114037707A publication Critical patent/CN114037707A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30: Services specially adapted for particular environments, situations or purposes
    • H04W4/40: Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/46: Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for vehicle-to-vehicle communication [V2V]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Abstract

The invention discloses a network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system, wherein: S1, a sensing data sending unit transmits acquired 3D raw point cloud data to a first target detection task module; S2, the point cloud segmentation layer of the first target detection task module processes the 3D raw point cloud data to generate segmented point cloud sensing data; S3, a second target detection task module generates registered point cloud sensing data through a data registration layer according to coordinate rotation and displacement calibration; S4, the second target detection task module fuses the registered 3D point cloud sensing data with its own 3D raw point cloud data through a point cloud feature fusion layer to generate fused point cloud data; and S5, a sensing data receiving unit performs feature extraction, classification and regression on the fused point cloud data and outputs the target data.

Description

Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system
Technical Field
The invention relates to the technical field of wireless communication for the Internet of Vehicles, and in particular to a network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system.
Background
It is important for an autonomous vehicle to be able to accurately sense the surrounding traffic environment in real time. At present, the perception of the surrounding environment by an autonomous vehicle mainly depends on the advanced sensors equipped on the vehicle, such as cameras, millimeter-wave radar and lidar. However, any sensor may fail to sense due to factors such as device damage, occlusion by road obstacles, a limited sensing range, or weather conditions, so the sensing capability of a single vehicle alone falls far short of the extremely high safety requirements of automatic driving. With the development of wireless communication technology, it has been proposed to share sensing data between vehicles over V2V wireless communication to expand the vehicles' sensing range; this technology is called "cooperative sensing".
Existing work on cooperative sensing falls into three categories according to the type of data shared: cooperative sensing based on raw data, based on feature data, and based on result data. In raw-data-based cooperative sensing, unprocessed sensor data is shared between vehicles; this retains the information of the real scene to the maximum extent, provides the receiving vehicle with more complete sensing data, and thus maximizes the receiving vehicle's sensing capability. In feature-data-based and result-data-based cooperative sensing, to reduce the amount of data transmitted over the network, the raw point cloud data is fed into a target detection model, and the intermediate feature data or the detection result, respectively, is output for transmission. Although these two modes reduce the data volume transmitted in the network, they place extremely high demands on the detection precision of the vehicle's model and depend excessively on the sensing capability of a single vehicle. Meanwhile, in a real environment the wireless channel changes from moment to moment: if all point cloud data is transmitted at once, the transmitted data volume is too large, the transmission process cannot adapt to network changes, transmission fails, and the vehicle's perception of its surroundings is affected.
Disclosure of Invention
Aiming at the problem that existing cooperative sensing methods based on raw point cloud data cannot adapt to dynamic changes in wireless channel bandwidth, the invention provides a network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system. The invention targets a typical application in a perception system, namely 3D target detection, and improves perception precision while enlarging the vehicle's perception range; at the same time, it adapts to dynamic changes in wireless network bandwidth, guarantees the real-time performance of target detection, and, by transmitting raw point cloud data between vehicles, ensures the authenticity and accuracy of the information carried by the transmitted data.
The invention is implemented by adopting the following technical scheme:
a network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system comprises a sensing data sending unit, a first target detection task module, a second target detection task module and a sensing data receiving unit; the sensing data sending unit transmits the point cloud sensing data to the sensing data receiving unit through a V2V wireless data channel; the method comprises the following steps:
s1, the perception data sending unit transmits the acquired 3D original point cloud data to a first target detection task module;
s2, processing the 3D original point cloud data by the first target detection task module point cloud segmentation layer to generate segmented point cloud sensing data;
s3, the second target detection task module generates registered feature sensing data through the data registration layer according to the coordinate steering and displacement calibration of the point cloud sensing data; wherein:
201. calculating and generating a rotation matrix R according to the characteristic sensing data by the following formula;
R=Rzyaw)Rypitch)Rxroll)
in the formula [ theta ]yawpitchrollRespectively are the difference values of a yaw angle, a pitch angle and a roll angle;
202. calibrating the coordinate steering and displacement of the 3D original point cloud data of the perception data sending unit according to the following formula;
Figure BDA0003327923340000021
in the formula (X)s,Ys,Zs) And (X)s′,Ys′,Zs') represents the coordinate systems in which the pre-and post-registration transmission data are located, respectively, (Δ d)x,Δdy,Δdz) Representing the difference in displacement between the transmitted and received data;
s4, the second target detection task module fuses the registered feature perception data and the 3D original point cloud data of the second target detection task module through a point cloud feature fusion layer to generate fused point cloud data;
s5, the perception data receiving unit performs feature extraction, classification and regression operation on the fused point cloud data, and outputs target data, wherein: the fused point cloud sensing data is obtained through the following formula:
Pf=Pr∪Ps
in the formula Pf,Pr,Ps' respectively represents fused point cloud data, receiver original data and calibrated sender point cloud data.
Further, the point cloud segmentation layer processes the 3D raw point cloud data by an angle-based or point-density-based segmentation method, so as to reduce the amount of point cloud data that needs to be transmitted.
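Written out, the elemental rotations composing R in step 201 take the standard rigid-body form (standard kinematics; the original text leaves them implicit), in LaTeX:

    R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
    R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}, \quad
    R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}

so each sender point p is calibrated to R·p + (Δd_x, Δd_y, Δd_z)ᵀ.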
Advantageous effects
1. The invention provides an end-to-end cooperative sensing framework for automatic driving at the raw point cloud level, which supports the sharing and fusion of raw point cloud data from multiple vehicles, enlarging the vehicle's sensing range and improving its sensing precision.
2. The invention provides a bandwidth-adaptive data segmentation algorithm and a segmentation scheme for raw point cloud data; by adaptively adjusting the amount of sensing data shared between vehicles, optimal perception precision is achieved while timeliness is guaranteed.
3. The invention is applicable to various 3D target detection models and supports intelligent connected vehicles with different computing capabilities.
Drawings
FIG. 1 is a flow chart of point cloud based 3D object detection;
FIG. 2 is a flow diagram of a cooperative sensing system;
FIG. 3 is a diagram of perceptual data segmentation;
FIG. 4 is a schematic diagram of perceptual data registration.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the invention is discussed in detail below with reference to the accompanying drawings and embodiments. The embodiments are only illustrative, not restrictive, and do not limit the scope of the invention.
As shown in FIG. 1 and FIG. 2, the invention provides a network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system, which comprises a sensing data sending unit, a first target detection task module, a second target detection task module and a sensing data receiving unit; the sensing data sending unit transmits the point cloud sensing data to the sensing data receiving unit through a V2V wireless data channel. The method comprises the following steps:
s1, the perception data sending unit transmits the acquired 3D original point cloud data to a first target detection task module;
s2, processing the 3D original point cloud data by the first target detection task module point cloud segmentation layer to generate segmented point cloud sensing data; the point cloud segmentation layer processes the 3D original point cloud data through an angle segmentation or point density segmentation method so as to reduce the amount of the point cloud data needing to be transmitted;
s3, the second target detection task module generates registered feature sensing data through the data registration layer according to the coordinate steering and displacement calibration of the point cloud sensing data; wherein:
201. calculating and generating a rotation matrix R according to the characteristic sensing data by the following formula;
R=Rzyaw)Rypitch)Rxroll)
in the formula [ theta ]yawpitchrollRespectively are the difference values of a yaw angle, a pitch angle and a roll angle;
202. calibrating the coordinate steering and displacement of the 3D original point cloud data of the perception data sending unit according to the following formula;
Figure BDA0003327923340000041
in the formula (X)s,Ys,Zs) And (X)s′,Ys′,Zs') represents the coordinate systems in which the pre-and post-registration transmission data are located, respectively, (Δ d)x,Δdy,Δdz) Representing the difference in displacement between the transmitted and received data;
s4, the second target detection task module fuses the registered feature perception data and the 3D original point cloud data of the second target detection task module through a point cloud feature fusion layer to generate the registered feature perception data;
s5, the perception data receiving unit performs feature extraction, classification and regression operation on the fused point cloud data, and outputs target data, wherein: the fused point cloud sensing data is obtained through the following formula:
Pf=Pr∪Ps
in the formula Pf,Pr,Ps' respectively represents fused point cloud data, receiver original data and calibrated sender point cloud data.
The practical application of the invention is as follows:
step 1: the method comprises the steps that a vehicle at a sending party calculates the proportion of original sensing data transmitted by the next frame under the current bandwidth according to the current channel condition, the problem is modeled into a linear programming problem, the target is to enable the final detection precision of a cooperative target to be the highest, as shown in formula (1), and the precondition is to meet the real-time performance of target detection, namely the frame rate is consistent with the sampling rate of a laser radar.
max_α α · f_{l,m}    (1)
s.t. t_e2e ≤ Δt,
0 ≤ α ≤ 1,
where t_e2e denotes the end-to-end delay of the whole cooperative sensing system, i.e., the time from the sender acquiring point cloud data from the lidar to the receiver obtaining the target detection result on the fused data. t_e2e is calculated as shown in formula (2):
t_e2e = t_s1 + t_raw + t_r2 + t_r3,   t_e2e < T    (2)
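As an illustration, the bandwidth-adaptive choice of α can be sketched in Python as follows (a minimal sketch; the function name, the fixed-delay decomposition, and the assumption that detection precision is non-decreasing in α are illustrative assumptions, not details fixed by the patent):

    def transmit_ratio(bandwidth_bps: float, frame_bytes: int,
                       t_proc_s: float, frame_period_s: float) -> float:
        """Largest share alpha of a raw point cloud frame that can be sent
        while keeping the end-to-end delay within one lidar frame period."""
        t_budget = frame_period_s - t_proc_s        # time left for V2V transmission
        if t_budget <= 0:
            return 0.0                              # no headroom: share nothing
        max_bytes = bandwidth_bps * t_budget / 8.0  # bytes sendable in the budget
        return min(1.0, max_bytes / frame_bytes)    # clamp alpha to [0, 1]

Since detection precision does not decrease as more raw points are shared, the largest feasible α also maximizes the objective of formula (1).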
Step 2: The sender vehicle segments the raw point cloud data according to the calculated data proportion α and shares the sensing data. Segmentation can be done in two ways, angle-based or point-density-based, as shown in FIG. 3. Under angle-based segmentation, since the field of view directly in front of the vehicle is relatively important, the shared raw data is centered on the region directly ahead; under density-based segmentation, since points become sparser with distance and the sparse regions are the key to enhancing cooperative perception, the raw data at positions of sparser point density is preferably shared.
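A minimal sketch of the angle-based variant (assuming the shared portion is centered on the forward direction; the function name and the (N, 4) array layout are illustrative assumptions):

    import numpy as np

    def segment_by_angle(points: np.ndarray, alpha: float) -> np.ndarray:
        """Keep the fraction alpha of points whose azimuth lies closest to
        the vehicle's forward direction (the +x axis).

        points -- (N, 4) array of x, y, z, intensity in the sender's lidar frame
        alpha  -- share of the frame to transmit, from the bandwidth model
        """
        azimuth = np.abs(np.arctan2(points[:, 1], points[:, 0]))  # 0 rad = straight ahead
        order = np.argsort(azimuth)            # forward-most points first
        keep = int(len(points) * alpha)
        return points[order[:keep]]

A density-based variant would instead rank points by local density (for example, distance to the k-th nearest neighbour) and keep the sparsest regions first.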
Step 3: The receiver registers the received sensing data (as shown in FIG. 2): it calculates a rotation matrix from the GPS and IMU data of the two vehicles and unifies their coordinate systems. The rotation matrix R is calculated by formula (3), where θ_yaw, θ_pitch and θ_roll are the differences in yaw, pitch and roll angles, respectively:
R = R_z(θ_yaw) · R_y(θ_pitch) · R_x(θ_roll)    (3)
The rotation and displacement of all coordinates of the sender's data are then calibrated as shown in formula (4), where (X_s, Y_s, Z_s) and (X_s′, Y_s′, Z_s′) are the sender's coordinates before and after registration, respectively, and (Δd_x, Δd_y, Δd_z) is the displacement difference between the two vehicles:
(X_s′, Y_s′, Z_s′)ᵀ = R · (X_s, Y_s, Z_s)ᵀ + (Δd_x, Δd_y, Δd_z)ᵀ    (4)
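Formulas (3) and (4) translate directly into Python (a sketch under the conventions above; array shapes and names are assumptions):

    import numpy as np

    def rotation_matrix(d_yaw: float, d_pitch: float, d_roll: float) -> np.ndarray:
        """R = Rz(yaw) @ Ry(pitch) @ Rx(roll), as in formula (3)."""
        cy, sy = np.cos(d_yaw), np.sin(d_yaw)
        cp, sp = np.cos(d_pitch), np.sin(d_pitch)
        cr, sr = np.cos(d_roll), np.sin(d_roll)
        Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
        Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
        return Rz @ Ry @ Rx

    def register_points(points_xyz: np.ndarray, d_angles, d_xyz) -> np.ndarray:
        """Apply formula (4): rotate each (x, y, z) sender point into the
        receiver's frame, then add the displacement difference."""
        R = rotation_matrix(*d_angles)
        return points_xyz @ R.T + np.asarray(d_xyz)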
and 4, step 4: fusing the calibrated original point cloud data with original point cloud data acquired from a laser radar by the user, and performing feature extraction on the fused data, wherein: the raw data fusion expression is formula (5)
Pf=Pr∪Ps′ (5)
In the formula Pf,Pr,Ps' respectively represents fused raw data, receiver raw data and registered sender raw data.
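Because raw point clouds are unordered point sets, the union of formula (5) amounts to concatenation (a minimal sketch):

    import numpy as np

    def fuse_point_clouds(p_receiver: np.ndarray, p_sender_registered: np.ndarray) -> np.ndarray:
        """P_f = P_r U P_s': stack the receiver's own raw points with the
        registered sender points into a single cloud for detection."""
        return np.vstack([p_receiver, p_sender_registered])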
Step 5: The fused raw data is input into the receiver vehicle's detection model to obtain the final 3D target detection result.
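Putting the sketches above together, one frame flows roughly as follows (illustrative only; the numeric values are placeholders and detect_3d stands in for whichever 3D detection model the vehicle runs):

    # Sender side, once per lidar frame (10 Hz assumed)
    alpha = transmit_ratio(bandwidth_bps=27e6, frame_bytes=2_000_000,
                           t_proc_s=0.04, frame_period_s=0.1)
    shared = segment_by_angle(sender_points, alpha)              # transmitted over V2V

    # Receiver side
    aligned = register_points(shared[:, :3], d_angles, d_xyz)    # formulas (3)-(4)
    fused = fuse_point_clouds(receiver_points[:, :3], aligned)   # formula (5)
    boxes = detect_3d(fused)                                     # final 3D detections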
The present invention is not limited to the above-described embodiments. The foregoing description of the specific embodiments is intended to describe and illustrate the technical solutions of the present invention, and the above specific embodiments are merely illustrative and not restrictive. Those skilled in the art can make many changes and modifications to the invention without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (2)

1. A network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system, characterized in that: the cooperative sensing system comprises a sensing data sending unit, a first target detection task module, a second target detection task module and a sensing data receiving unit; the sensing data sending unit transmits the point cloud sensing data to the sensing data receiving unit through a V2V wireless data channel; and the system performs the following steps:
s1, the perception data sending unit transmits the acquired 3D original point cloud data to a first target detection task module;
s2, processing the 3D original point cloud data by the first target detection task module point cloud segmentation layer to generate segmented point cloud sensing data;
s3, the second target detection task module generates point cloud sensing data after registration through a data registration layer according to coordinate steering and displacement calibration; wherein:
301. calculating and generating a rotation matrix R according to the point cloud sensing data by the following formula;
R=Rzyaw)Rypitch)Rxroll)
in the formula [ theta ]yawpitchrollRespectively are the difference values of a yaw angle, a pitch angle and a roll angle;
302. calibrating the coordinate steering and displacement of the 3D original point cloud data of the perception data sending unit according to the following formula;
Figure FDA0003327923330000011
in the formula (X)s,Ys,Zs) And (X)s′,Ys′,Zs') represents the coordinate systems in which the pre-and post-registration transmission data are located, respectively, (Δ d)x,Δdy,Δdz) Representing the difference in displacement between the transmitted and received data;
s4, the second target detection task module fuses the registered 3D point cloud sensing data and the 3D original point cloud data of the second target detection task module through a point cloud feature fusion layer to generate fused point cloud data;
s5, the perception data receiving unit performs feature extraction, classification and regression operation on the fused point cloud data, and outputs target data, wherein: the fused point cloud sensing data is obtained through the following formula:
Pf=Pr∪Ps
in the formula Pf,Pr,Ps' respectively represents fused point cloud data, receiver original data and calibrated sender point cloud data.
2. The network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system according to claim 1, characterized in that: the point cloud segmentation layer processes the 3D raw point cloud data by an angle-based or point-density-based segmentation method, so as to reduce the amount of point cloud data that needs to be transmitted.
CN202111270795.4A 2021-10-29 2021-10-29 Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system Pending CN114037707A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111270795.4A CN114037707A (en) 2021-10-29 2021-10-29 Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111270795.4A CN114037707A (en) 2021-10-29 2021-10-29 Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system

Publications (1)

Publication Number Publication Date
CN114037707A true CN114037707A (en) 2022-02-11

Family

ID=80135785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111270795.4A Pending CN114037707A (en) 2021-10-29 2021-10-29 Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system

Country Status (1)

Country Link
CN (1) CN114037707A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020216316A1 (en) * 2019-04-26 2020-10-29 纵目科技(上海)股份有限公司 Driver assistance system and method based on millimetre wave radar, terminal, and medium
CN113490178A (en) * 2021-06-18 2021-10-08 天津大学 Intelligent networking vehicle multistage cooperative sensing system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114579556A (en) * 2022-05-05 2022-06-03 中汽创智科技有限公司 Data processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109920246B (en) Collaborative local path planning method based on V2X communication and binocular vision
CN108574929B (en) Method and apparatus for networked scene rendering and enhancement in an onboard environment in an autonomous driving system
CN105711597B (en) Front locally travels context aware systems and method
CN112712717B (en) Information fusion method, device and equipment
US10867404B2 (en) Distance estimation using machine learning
CN106896393B (en) Vehicle cooperative type object positioning optimization method and vehicle cooperative positioning device
US10552695B1 (en) Driver monitoring system and method of operating the same
US10762363B2 (en) Road sign recognition for connected vehicles
CN112612287A (en) System, method, medium and device for planning local path of automatic driving automobile
US20200189459A1 (en) Method and system for assessing errant threat detection
US20200286382A1 (en) Data-to-camera (d2c) based filters for improved object detection in images based on vehicle-to-everything communication
CN107316457B (en) Method for judging whether road traffic condition accords with automatic driving of automobile
US11150096B2 (en) Method and device for the localization of a vehicle based on a degree of robustness of the localization
US20220215197A1 (en) Data processing method and apparatus, chip system, and medium
CN113792707A (en) Terrain environment detection method and system based on binocular stereo camera and intelligent terminal
CN114037707A (en) Network bandwidth self-adaptive automatic driving point cloud data cooperative sensing system
CN113743709A (en) Online perceptual performance assessment for autonomous and semi-autonomous vehicles
CN114332494A (en) Three-dimensional target detection and identification method based on multi-source fusion under vehicle-road cooperation scene
CN113490178B (en) Intelligent networking vehicle multistage cooperative sensing system
US20230339492A1 (en) Generating and depicting a graphic of a phantom vehicle
CN114521001A (en) Network bandwidth self-adaptive automatic driving characteristic data cooperative sensing system
CN112309156A (en) Traffic light passing strategy based on 5G hierarchical decision
Xie et al. Soft Actor–Critic-Based Multilevel Cooperative Perception for Connected Autonomous Vehicles
WO2023036032A1 (en) Lane line detection method and apparatus
Shin et al. Compensation of wireless communication delay for integrated risk management of automated vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination