CN115578709B - Feature level cooperative perception fusion method and system for vehicle-road cooperation - Google Patents

Feature level cooperative perception fusion method and system for vehicle-road cooperation

Info

Publication number: CN115578709B
Application number: CN202211480590.3A
Authority: CN (China)
Prior art keywords: point cloud, end point, vehicle, feature, road
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN115578709A (en)
Inventors: 刘前飞, 高楠楠, 黄文艺, 孙超, 王博, 宋士佳, 王文伟
Current Assignee: Shenzhen Automotive Research Institute of Beijing University of Technology
Original Assignee: Shenzhen Automotive Research Institute of Beijing University of Technology
Application filed by Shenzhen Automotive Research Institute of Beijing University of Technology
Priority to CN202211480590.3A
Publication of CN115578709A
Application granted
Publication of CN115578709B

Classifications

    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V10/40: Extraction of image or video features
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y02T10/40: Engine management systems

Abstract

A feature-level cooperative perception fusion method and system for vehicle-road cooperation, relating to the technical field of cooperative perception for autonomous driving. Vehicle-end point cloud information and a timestamp corresponding to the vehicle-end point cloud information are acquired; feature extraction is performed on the vehicle-end point cloud information to generate a vehicle-end point cloud pseudo-image, and high-dimensional feature extraction is performed on the vehicle-end point cloud pseudo-image to generate the vehicle-end point cloud feature spatial distribution; roadside-end point cloud feature data are obtained, the roadside-end point cloud feature data being determined from roadside-end point cloud information stored according to the timestamp; the roadside-end point cloud feature data are decompressed, and the decompressed roadside-end point cloud feature data are mapped onto the vehicle-end point cloud feature spatial distribution using feature space correction; and the roadside-end point cloud feature data mapped onto the vehicle-end point cloud feature spatial distribution are fused with the vehicle-end point cloud feature spatial distribution, and the fused feature information is processed to realize vehicle-road cooperative perception.

Description

Feature level cooperative perception fusion method and system for vehicle-road cooperation
Technical Field
The invention relates to the technical field of automatic driving cooperative perception of vehicles, in particular to a feature level cooperative perception fusion method and system for vehicle-road cooperation.
Background
As research into autonomous driving technology deepens, research into cooperative perception for autonomous driving based on vehicle-road cooperation is gradually increasing. An autonomous vehicle can perceive its surrounding environment through the sensors installed on the vehicle itself, and can also obtain the state information of the traffic participants on the road through sensors fixedly installed on both sides of the road. By fusing the perception information from the two different sources, the vehicle end and the roadside traffic infrastructure, long-range and blind-spot-free cooperative perception can be realized for the vehicle, which improves and guarantees the safety of autonomous driving.
At present, cooperative perception for autonomous driving based on vehicle-road cooperation mainly relies on target-level information fusion, i.e., the vehicle and the roadside facility each perform 3D detection of the targets on the road, and the resulting 3D target information is then fused. However, this scheme has a major problem: the time synchronization consistency between the vehicle-end and roadside-end information is poor, so that the fused perception performance differs significantly from expectations.
Disclosure of Invention
The technical problem mainly solved by the invention is the poor time synchronization consistency between the vehicle-end information and the roadside-end information.
According to a first aspect, an embodiment provides a feature-level collaborative awareness fusion method for vehicle-road collaboration, including:
acquiring vehicle end point cloud information and a timestamp corresponding to the vehicle end point cloud information;
performing feature extraction on the vehicle end point cloud information to generate a vehicle end point cloud pseudo image, and performing high-dimensional feature extraction on the vehicle end point cloud pseudo image to generate vehicle end point cloud feature spatial distribution;
obtaining roadside endpoint cloud feature data, wherein the roadside endpoint cloud feature data are determined according to roadside endpoint cloud information stored according to the time stamp;
decompressing the roadside endpoint cloud feature data, and mapping the decompressed roadside endpoint cloud feature data to the vehicle end point cloud feature space distribution by using feature space correction;
and fusing the road side endpoint cloud characteristic data mapped to the vehicle end point cloud characteristic spatial distribution and the vehicle end point cloud characteristic spatial distribution to realize vehicle road cooperative sensing.
In one embodiment, the performing feature extraction on the vehicle end point cloud information to generate a vehicle-end point cloud pseudo-image, and performing high-dimensional feature extraction on the vehicle-end point cloud pseudo-image to generate the vehicle-end point cloud feature spatial distribution, includes:
performing feature extraction on the vehicle end point cloud information by using a point cloud object detection network to generate a vehicle-end point cloud pseudo-image, and performing high-dimensional feature extraction on the vehicle-end point cloud pseudo-image to generate a vehicle-end point cloud feature spatial distribution of 6C × 0.5H × 0.5W; wherein C is the number of channels, H is the height and W is the width.
In an embodiment, the roadside-end point cloud feature data being determined from the roadside-end point cloud information stored according to the timestamp includes:
the roadside-end point cloud feature data are obtained by performing feature extraction, high-dimensional feature extraction and dimension-reduction compression on the roadside-end point cloud information stored according to the timestamp, the resulting roadside-end point cloud feature data being 2C × 0.5H × 0.5W, wherein C is the number of channels, H is the height and W is the width.
In an embodiment, the decompressing the roadside-end point cloud feature data includes:
decompressing the roadside-end point cloud feature data by using a set convolution kernel with a set number of channels to generate decompressed roadside-end point cloud feature data of 6C × 0.5H × 0.5W; wherein C is the number of channels, H is the height and W is the width.
In an embodiment, the mapping the dimension-increased roadside-end point cloud feature data onto the vehicle-end point cloud feature spatial distribution by using feature space rectification includes:
predicting and calculating transformation parameters of road side endpoint cloud feature data mapped to vehicle end point cloud feature space distribution by using a 2D convolutional neural network;
determining the corresponding position coordinates of the road-side end point cloud characteristic data in the vehicle-end point cloud characteristic space distribution according to the transformation parameters;
and determining a corresponding characteristic value of the road side endpoint cloud characteristic data according to the position coordinates so as to complete the mapping of the road side endpoint cloud characteristic data to the vehicle end point cloud characteristic space distribution.
In an embodiment, the predicting and calculating a transformation parameter of the road side endpoint cloud feature data mapped to the vehicle end point cloud feature spatial distribution by using the 2D convolutional neural network includes:
determining the feature space conversion parameters with a convolutional neural network according to the following formula:

$$A_\theta = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix}$$

where $A_\theta$ is the transformation parameter matrix and $\theta_{11}$, $\theta_{12}$, $\theta_{13}$, $\theta_{21}$, $\theta_{22}$ and $\theta_{23}$ are, respectively, the first to sixth conversion parameters for converting the roadside-end point cloud feature data into the vehicle-end point cloud feature spatial distribution.
In an embodiment, the determining the corresponding position coordinates of the roadside-end point cloud feature data in the vehicle-end point cloud feature spatial distribution according to the transformation parameters includes:
calculating the corresponding position coordinates of the roadside-end point cloud feature data in the vehicle-end point cloud feature spatial distribution with the following formula:

$$\begin{bmatrix} x^{s} \\ y^{s} \end{bmatrix} = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{bmatrix} x^{t} \\ y^{t} \\ 1 \end{bmatrix}$$

where $(x^{t}, y^{t})$ are the element coordinates of each element in the vehicle-end point cloud feature spatial distribution, $(x^{s}, y^{s})$ are the corresponding position coordinates of the roadside-end point cloud feature data in the vehicle-end point cloud feature spatial distribution, and $\theta_{11}$, $\theta_{12}$, $\theta_{13}$, $\theta_{21}$, $\theta_{22}$ and $\theta_{23}$ are, respectively, the first to sixth conversion parameters for converting the roadside-end point cloud feature data into the vehicle-end point cloud feature spatial distribution.
In one embodiment, determining a corresponding characteristic value of the road side endpoint cloud characteristic data according to the position coordinates to complete mapping of the road side endpoint cloud characteristic data to vehicle end point cloud characteristic space distribution, includes:
and determining the characteristic value of the road side endpoint cloud characteristic data at the coordinate position of the vehicle end point cloud characteristic space distribution by using a bilinear interpolation method.
According to a second aspect, an embodiment provides a feature-level collaborative awareness fusion system for vehicle-road collaboration, including:
the vehicle end calculating unit is used for acquiring vehicle end point cloud information and a timestamp corresponding to the vehicle end point cloud information, performing feature extraction on the vehicle end point cloud information to generate a vehicle end pseudo image, and performing high-dimensional feature extraction on the vehicle end pseudo image to generate vehicle end point cloud feature spatial distribution;
the road side end computing unit is used for acquiring road side end point cloud information and storing the road side end point cloud information according to a time stamp corresponding to the vehicle end point cloud information to generate road side end information;
the road side end computing unit is also used for extracting the characteristics of the road side end information to generate a road side end pseudo image, and performing high-dimensional characteristic extraction and then performing dimension reduction compression on the road side end pseudo image to generate road side end point cloud characteristic data;
the vehicle end computing unit acquires the road side endpoint cloud feature data, decompresses the road side endpoint cloud feature data, and maps the decompressed road side endpoint cloud feature data to the vehicle end point cloud feature spatial distribution by using feature space correction;
and the vehicle end computing unit fuses the vehicle end point cloud characteristic spatial distribution and the roadside end point cloud characteristic data mapped to the vehicle end point cloud characteristic spatial distribution so as to realize vehicle-road cooperation.
According to a third aspect, an embodiment provides a computer-readable storage medium having a program stored thereon, the program being executable by a processor to implement the above-mentioned method.
According to the feature-level cooperative perception fusion method and system for vehicle-road cooperation and the computer-readable storage medium of the above embodiments, after the vehicle-end point cloud information and its corresponding timestamp are obtained, the roadside-end point cloud information is obtained. The roadside-end point cloud information is stored according to the corresponding timestamp to generate the roadside-end information; high-dimensional feature extraction is performed on the vehicle-end point cloud information to generate the vehicle-end point cloud feature spatial distribution, while the roadside-end point cloud information is feature-extracted and compressed to generate the roadside-end point cloud feature data. The roadside point cloud feature data are decompressed, the decompressed roadside-end point cloud feature data are mapped onto the vehicle-end point cloud feature spatial distribution, the two are fused, and the fused feature information is processed to realize vehicle-road cooperative perception. In this way, the vehicle-end point cloud information and the roadside-end point cloud information are synchronized in time and space, the feature space differences caused by the inconsistent data acquisition moments of the road end and the vehicle end are eliminated, and the alignment of the vehicle-end and roadside-end point cloud feature spaces is realized.
Drawings
FIG. 1 is a technical flow chart of a target-level converged vehicle-road cooperative sensing scheme;
FIG. 2 is a first flowchart of a feature level perception fusion method of vehicle-road coordination according to an embodiment;
FIG. 3 is a flowchart illustrating a feature-level collaborative awareness fusion method for vehicle-to-road collaboration according to an embodiment;
FIG. 4 is a diagram of dimension increase using 1×1 convolution kernels according to an embodiment;
FIG. 5 is a diagram of dimension reduction using 1×1 convolution kernels according to an embodiment;
FIG. 6 is a second flowchart of a feature-level perceptual fusion method of vehicle-road collaboration according to an embodiment;
FIG. 7 is a diagram of bilinear interpolation according to an embodiment;
FIG. 8 is a schematic diagram of a final detection result obtained after point cloud feature fusion according to an embodiment;
fig. 9 is a schematic diagram of a feature level perception fusion system for vehicle-road coordination according to an embodiment.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. Wherein like elements in different embodiments have been given like element numbers associated therewith. In the following description, numerous details are set forth in order to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of the features may be omitted or replaced with other elements, materials, methods in different instances. In some instances, certain operations related to the present application have not been shown or described in detail in order to avoid obscuring the core of the present application from excessive description, and it is not necessary for those skilled in the art to describe these operations in detail, so that they may be fully understood from the description in the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Also, the various steps or actions in the method descriptions may be interchanged or reordered in ways apparent to those skilled in the art. Thus, the various sequences in the specification and drawings are for the purpose of describing certain embodiments only and are not intended to imply a required sequence unless otherwise indicated where such a sequence must be followed.
The numbering of the components as such, e.g., "first", "second", etc., is used herein only to distinguish the objects as described, and does not have any sequential or technical meaning. The term "connected" and "coupled" as used herein includes both direct and indirect connections (couplings), unless otherwise specified.
For cooperative sensing of a vehicle and a road, a cooperative sensing scheme of a target level fusion (the target level fusion refers to the fusion of prediction results of various modal models and the final decision making) has been disclosed, and the main idea is that laser radars at a road side end and a vehicle end respectively perform 3D detection and tracking on targets around a vehicle, and then fuse two types of target information to realize the final target detection. The method comprises the following three steps.
The method comprises the following steps: road side end sensing-laser point cloud 3D target detection.
The roadside end sensing means that the roadside facility can detect all traffic participants (such as pedestrians and vehicles) on the road in real time by installing devices such as high-definition cameras, sensors such as laser radars and computing units on traffic poles on two sides of the road.
Referring to fig. 1, as shown in the upper part of the technical flowchart of the target-level fused vehicle-road cooperative perception scheme, the roadside computing unit first obtains original point cloud data through a high line-count lidar. Feature extraction is then performed on the point cloud data to obtain a corresponding feature matrix. Detection is then carried out on the obtained feature matrix, which mainly involves regression and classification and yields the 3D bounding boxes, orientations and target classes of all targets. Moving targets are continuously tracked according to their motion characteristics so as to obtain their speed information. Finally, the obtained detection information of all targets (including position, orientation, category, speed and so on) is transmitted from the roadside computing unit to the communication unit RSU (Road Side Unit) and broadcast outwards.
Step two: vehicle end sensing-laser point cloud 3D target detection.
The vehicle-end sensing means that the automatic driving vehicle detects traffic participants (such as pedestrians) around the automatic driving vehicle through existing sensor devices such as a camera and a laser radar on the automatic driving vehicle. However, the vehicle-end sensing is easily blocked by other surrounding obstacles, and a sensing blind area in a certain range is easily formed.
Referring to the lower part of the technical flowchart of the target-level fused vehicle-road cooperative perception scheme in fig. 1, the vehicle-end laser point cloud 3D target detection process is basically consistent with roadside-end perception: the vehicle-end computing unit obtains the 3D bounding boxes, orientations, target categories and speed information of the targets around the vehicle through the four basic steps of original point cloud data acquisition, feature extraction, 3D target detection and tracking. The only difference is that the target information obtained by the vehicle-end computing unit does not need to be transmitted externally.
Step three: vehicle end perception-vehicle end and road side end 3D target information fusion.
Referring to fig. 1, shown in a lower part of a technical flowchart of a target-level integrated vehicle-road cooperative sensing scheme, first, a vehicle-side communication Unit OBU (On board Unit) continuously obtains target information data from a road-side communication Unit RSU, and transmits the data to a vehicle-side computing Unit. Then, the vehicle end calculating unit can simultaneously acquire the 3D target information of the road side end and the vehicle end. And finally, fusing the 3D target information of the two to obtain final target information, and outputting the final target information as a sensing result.
Multi-sensor information fusion is the main implementation method for cooperative perception in vehicle-road cooperative autonomous driving (vehicle-road cooperative perception means that perception of the environment around the vehicle is achieved by combining vehicle-end sensing information with roadside-end sensing information: the environment immediately around the vehicle is mainly perceived by the vehicle itself, while blind areas around the vehicle and distant environment information are mainly provided by roadside facilities). Because the data link is simple and the bandwidth requirement of V2X communication is low (V2X communication means that the roadside end is equipped with a communication unit RSU for data transmission and the autonomous vehicle is equipped with a communication unit OBU for data reception; the roadside perception information is broadcast through the RSU, and the OBU of the autonomous vehicle continuously receives the information transmitted from the roadside end and forwards it to the vehicle controller), the current multi-sensor information fusion method for vehicle-road cooperative perception mainly adopts target-level information fusion, and the specific technical flow is the target-level fused vehicle-road cooperative perception scheme described above. However, the biggest defect of this scheme is that the time synchronization consistency between the vehicle-end and roadside-end information is poor, so that the fused perception performance differs significantly from expectations. The disadvantages of the target-level fused vehicle-road cooperative perception scheme are mainly embodied in two aspects:
In the first aspect, the target-level fused vehicle-road cooperative perception scheme requires that the information from the two different sources, the roadside end and the vehicle end, must not have a large time interval; otherwise the fused performance is greatly degraded. However, since the roadside sensing device needs a certain amount of time for 3D target detection, and the transmission from the roadside RSU to the vehicle-end OBU also takes time, the roadside perception information received by the autonomous vehicle lags considerably behind the perception information detected by the vehicle-end sensors. If this time interval exceeds a certain threshold, the roadside data information is no longer of practical significance for the autonomous vehicle.
In the second aspect, the acquisition of the original environment information by the vehicle end and the road side end is not synchronously triggered, and the acquisition of the original point cloud data by the laser radars of the vehicle end and the road side end is not at the same time, so that the same target is not aligned and unified in the space of the vehicle end and the road side end, and some position differences exist all the time. The performance of sensor information fusion between the roadside end and the vehicle end is often poorer than the perception performance of the pure vehicle end.
Therefore, the problem to be solved is how to improve the time synchronization consistency between vehicle-end and roadside-end information, reduce the time difference between vehicle-end and roadside-end perception information, and at the same time align and unify the original data information in the two spaces of the roadside end and the vehicle end, thereby improving the cooperative perception performance of vehicle-road cooperative autonomous driving.
Referring to fig. 2, in some embodiments of the present application, a feature level perception fusion method for vehicle-road cooperation is provided, which specifically includes the following steps.
Step S100: and acquiring vehicle end point cloud information and a timestamp corresponding to the vehicle end point cloud information.
Referring to fig. 3, in some embodiments, when the vehicle-end computing unit obtains one frame of vehicle-end point cloud information, the automatic driving vehicle transmits positioning information of the vehicle and a timestamp of a current time to the vehicle-end communication unit OBU through a high-precision positioning system in the automatic driving vehicle, where the positioning information includes longitude and latitude coordinates and vehicle body posture orientation angle information. The vehicle-end communication unit OBU broadcasts the vehicle-end point cloud information, the vehicle positioning information and the corresponding time stamps to all road-side ends, and road-side end facilities near the automatic driving vehicle can be guaranteed to receive the vehicle-end point cloud information, the positioning information of the vehicle and the corresponding time stamps. In some embodiments, the vehicle-end point cloud information refers to point cloud information of an obstacle acquired by a vehicle end.
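The message broadcast in this step can be pictured as a small data structure; the following is a minimal Python sketch of such a structure, where the class, field and function names (VehicleEndFrame, broadcast_frame, obu.send) are illustrative assumptions and not part of the original disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VehicleEndFrame:
    timestamp: float          # acquisition time of this vehicle-end point cloud frame (s)
    latitude: float           # high-precision positioning: longitude/latitude coordinates
    longitude: float
    yaw_deg: float            # vehicle body attitude / orientation angle
    point_cloud: np.ndarray   # (N, 4) array of x, y, z, intensity points

def broadcast_frame(obu, frame: VehicleEndFrame) -> None:
    """Hypothetical helper: hand the frame to the vehicle-end OBU for V2X broadcast."""
    obu.send(frame)
```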
Step S200: and performing feature extraction on the vehicle end point cloud information to generate a vehicle end pseudo image, and performing high-dimensional feature extraction on the vehicle end pseudo image to generate vehicle end point cloud feature spatial distribution.
In some embodiments, a point cloud object detection network is used to perform feature extraction on the vehicle-end point cloud information to generate a vehicle-end pseudo-image, and high-dimensional feature extraction is performed on the vehicle-end pseudo-image to generate a vehicle-end point cloud feature spatial distribution of 6C × 0.5H × 0.5W; wherein C is the number of channels, H is the height and W is the width.
In some embodiments, the point cloud object detection network used for feature extraction is a PointPillars network. The PointPillars network is a point cloud object detection network whose architecture mainly comprises a Pillar Feature Net, a Backbone and a Detection Head: the Pillar Feature Net converts the point cloud data into a pseudo-image, the Backbone extracts features with a 2D convolutional neural network, and the Detection Head performs 3D detection on the extracted feature data to obtain the target information. The PointPillars network is widely adopted in industry because of its small number of parameters and fast network computation.
When the features of the vehicle-end point cloud information are extracted in step S200, the vehicle-end computing unit converts the vehicle-end point cloud information into a 3-dimensional tensor of 1C × 1H × 1W, i.e., a pseudo-image, through the Pillar Feature Net sub-network. The Backbone sub-network then performs high-dimensional extraction on the pseudo-image and finally outputs a vehicle-end point cloud feature spatial distribution with a feature tensor size of 6C × 0.5H × 0.5W, where C is the number of channels, H is the height and W is the width.
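As an illustration of step S200, the following PyTorch-style sketch shows a pseudo-image of size C × H × W being turned into a 6C × 0.5H × 0.5W feature distribution by a small 2D backbone; the layer widths, strides and the example sizes C = 64, H = 496, W = 432 are assumptions for illustration, not the patent's exact PointPillars Backbone.

```python
import torch
import torch.nn as nn

class VehicleEndBackbone(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        # the stride-2 block halves H and W; the channel width grows to 6C overall
        self.block = nn.Sequential(
            nn.Conv2d(c, 2 * c, 3, stride=2, padding=1), nn.BatchNorm2d(2 * c), nn.ReLU(),
            nn.Conv2d(2 * c, 6 * c, 3, stride=1, padding=1), nn.BatchNorm2d(6 * c), nn.ReLU(),
        )

    def forward(self, pseudo_image: torch.Tensor) -> torch.Tensor:
        # pseudo_image: (B, C, H, W) -> features: (B, 6C, H/2, W/2)
        return self.block(pseudo_image)

pseudo_image = torch.randn(1, 64, 496, 432)       # assumed C=64, H=496, W=432
vehicle_features = VehicleEndBackbone(64)(pseudo_image)
print(vehicle_features.shape)                     # torch.Size([1, 384, 248, 216])
```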
Step S300: and acquiring road side endpoint cloud feature data.
In some embodiments, the roadside-end point cloud feature data are determined by performing high-dimensional feature extraction on the pseudo-image generated from the roadside-end point cloud information stored according to the timestamp corresponding to the vehicle-end point cloud information, followed by dimension-reduction compression. The roadside-end point cloud information passes through the point cloud object detection network to generate a pseudo-image, and after high-dimensional extraction a roadside-end point cloud feature space of 6C × 0.5H × 0.5W is generated. The roadside-end point cloud feature space is then compressed through a first set convolution kernel with a first set number of channels, giving roadside-end point cloud feature data of 2C × 0.5H × 0.5W; wherein C is the number of channels, H is the height and W is the width.
The roadside-end computing unit acquires roadside-end point cloud information in real time with its installed high line-count lidar and caches the roadside-end point cloud information according to the timestamp corresponding to the vehicle-end point cloud information to generate the roadside-end information. With such caching, the roadside-end information contains both the vehicle-end point cloud information and the timestamp information corresponding to the vehicle-end point cloud information. According to the timestamp corresponding to the acquired vehicle-end point cloud information, the frame of roadside-end information whose time is closest to the vehicle-end point cloud information is taken from the roadside-end information as input data.
Because the roadside-end point cloud information is stored according to the vehicle-end point cloud information, the vehicle-end and roadside-end point cloud information remain basically consistent in the time dimension. Assuming a lidar operating frequency of $f = 20\,\mathrm{Hz}$, the maximum original time difference between the vehicle-end and roadside-end point cloud information at data acquisition is guaranteed to be $T_{\max} = 1/f = 50\,\mathrm{ms}$.
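A minimal sketch of this timestamp-based caching and nearest-frame selection is given below; the class and method names are assumptions introduced only for illustration.

```python
from collections import deque

class RoadsideFrameBuffer:
    def __init__(self, maxlen: int = 20):
        self.frames = deque(maxlen=maxlen)   # (timestamp, point_cloud) pairs

    def add(self, timestamp: float, point_cloud) -> None:
        self.frames.append((timestamp, point_cloud))

    def closest_to(self, vehicle_timestamp: float):
        # with a lidar frequency f = 20 Hz the residual offset is at most 1/f = 50 ms
        return min(self.frames, key=lambda item: abs(item[0] - vehicle_timestamp))
```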
In some embodiments, after the roadside end acquires the positioning information of the vehicle, the roadside end information is converted from the roadside end coordinate system to the corresponding coordinate system of the autonomous vehicle, so that the vehicle end point cloud information and the roadside end information can be fused conveniently.
In some embodiments, when the roadside-end point cloud feature data are obtained in step S300, the roadside-end computing unit converts the roadside-end information into a 3-dimensional tensor of size 1C × 1H × 1W, i.e., a pseudo-image, through the Pillar Feature Net sub-network. The Backbone sub-network then performs high-dimensional extraction on the pseudo-image and finally outputs a roadside-end point cloud feature space with a feature tensor size of 6C × 0.5H × 0.5W, where C is the number of channels, H is the height and W is the width.
In some embodiments, in order to reduce the bandwidth occupied when the roadside-end point cloud feature space is transmitted from the roadside end to the vehicle end, and to shorten the transmission time from the roadside RSU to the vehicle-end OBU, the feature data in the roadside-end point cloud feature space are compressed. 2C 1×1 convolution kernels are used to reduce the dimensionality of the point cloud feature space, thereby achieving data feature compression. Through the 1×1 convolution kernels, the feature tensor data in the roadside-end point cloud feature space become roadside-end point cloud feature data of 2C × 0.5H × 0.5W.
Referring to fig. 4 and 5, a 1×1 convolution kernel actually performs a linear combination (information integration) over the different channels of each point cloud feature point, while preserving the original planar structure of the point cloud features. In step S300, the depth of the feature output is regulated by the number of 1×1 convolution kernels, which implements dimension increase (as shown in fig. 4) or dimension reduction (as shown in fig. 5) of the feature data. If the number of 1×1 convolution kernels exceeds the number of channels of the original point cloud features, the dimensionality of the point cloud feature data is increased, which realizes feature data decompression. Similarly, if the number of 1×1 convolution kernels is smaller than the number of channels of the original point cloud features, the dimensionality of the point cloud feature data is reduced, i.e., the feature data are compressed.
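The compression and decompression described above can be sketched in PyTorch as two 1×1 convolutions, one with 2C output channels (roadside end, dimension reduction before transmission) and one with 6C output channels (vehicle end, dimension increase after reception); the example channel count C = 64 and the spatial size are assumptions.

```python
import torch
import torch.nn as nn

C = 64
compress = nn.Conv2d(6 * C, 2 * C, kernel_size=1)     # roadside end: dimension reduction
decompress = nn.Conv2d(2 * C, 6 * C, kernel_size=1)   # vehicle end: dimension increase

roadside_features = torch.randn(1, 6 * C, 248, 216)   # 6C x 0.5H x 0.5W feature space
transmitted = compress(roadside_features)             # 2C x 0.5H x 0.5W, sent via RSU/OBU
restored = decompress(transmitted)                     # back to 6C x 0.5H x 0.5W
print(transmitted.shape, restored.shape)
```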
After the road side end point cloud feature data are obtained, the vehicle end computing unit can quickly receive the road side end point cloud feature data through information broadcast from the road side end RSU to the vehicle end OBU.
Step S400: and decompressing the road side point cloud characteristic data, and mapping the decompressed road side end point cloud characteristic data to vehicle end point cloud characteristic spatial distribution by utilizing characteristic spatial correction.
In some embodiments, after the vehicle-end computing unit receives the roadside endpoint cloud feature data sent by the roadside end, necessary data preprocessing must be performed first. The data preprocessing comprises the steps of decompressing and dimension-increasing the road side end point cloud feature data, wherein the decompressing and dimension-increasing of the road side end point cloud feature data is mainly used for recovering original feature data information of a road side end.
In some embodiments, the roadside end point cloud feature data is subjected to dimensionality increase by using a second set convolution kernel of a second set channel to generate road side end point cloud feature data after dimensionality increase of 6C × 0.5H × 0.5W; wherein C is the number of channels; h is the height; w is the width.
In some embodiments, the data preprocessing further comprises performing feature space rectification on the road side endpoint cloud feature data after the dimension increase. The feature space correction is mainly used for performing necessary correction on road side point cloud feature data in a vehicle coordinate space, so that the road side point cloud feature data and vehicle end point cloud information are aligned in the vehicle end space. Only the road side point cloud feature data after being preprocessed can be fused with vehicle end point cloud information.
In some embodiments, although the vehicle-end point cloud information and the roadside endpoint cloud feature data ensure that the point cloud data from two different sources are substantially consistent in the time dimension through the steps S100 and S300, a certain error (< 50 ms) always exists between the two in the time dimension of data acquisition. Due to this time delay, the initial position and attitude of a moving object (such as a vehicle) tend to have a certain offset and rotation error in two spaces, the vehicle end and the roadside end. Therefore, in order to eliminate this error, necessary spatial correction is performed on the road-side end point cloud feature data. Referring to fig. 6, the method specifically includes the following steps.
Step S401: and predicting and calculating transformation parameters of the road side endpoint cloud characteristic data mapped to vehicle end point cloud characteristic space distribution by using the 2D convolutional neural network.
In some embodiments, the feature space transformation parameters are used to compute the parameters of the spatial transformation. The conversion from the roadside-end point cloud feature data space to the vehicle-end point cloud feature spatial distribution consists mainly of translation and rotation operations. The spatial transformation parameters $A_\theta$, shown below, are expressed as a matrix with 2 rows and 3 columns, and the feature space conversion parameters $A_\theta$ are obtained with a deep learning method based on a 2D convolutional neural network. The convolutional neural network module is composed, in sequence, of 3 convolutional layers, 1 fully-connected layer and 1 logistic regression layer.

$$A_\theta = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix}$$

where $A_\theta$ is the transformation parameter matrix and $\theta_{11}$, $\theta_{12}$, $\theta_{13}$, $\theta_{21}$, $\theta_{22}$ and $\theta_{23}$ are, respectively, the first to sixth conversion parameters for converting the roadside-end point cloud feature data into the vehicle-end point cloud feature spatial distribution.
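The parameter-prediction network of step S401 can be sketched as follows; the layer widths, the pooling/flatten stage and the identity-oriented initialisation are assumptions, and the final linear regression layer stands in for the logistic regression layer named above.

```python
import torch
import torch.nn as nn

class LocalizationNet(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 32)
        self.regress = nn.Linear(32, 6)   # theta_11 ... theta_23
        # start near the identity transform so training is stable (assumption)
        self.regress.weight.data.zero_()
        self.regress.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        x = self.conv(features).flatten(1)
        theta = self.regress(torch.relu(self.fc(x)))
        return theta.view(-1, 2, 3)       # A_theta as a 2 x 3 matrix per sample
```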
Step S402: and determining the corresponding position coordinates of the road-side end point cloud characteristic data in the vehicle-end point cloud characteristic space distribution according to the transformation parameters.
In some embodiments, based on the feature space transformation parameters $A_\theta$ obtained in step S401, the element coordinates of each element in the vehicle-end point cloud feature spatial distribution are traversed, and the corresponding position coordinates of the roadside-end point cloud feature data in the vehicle-end point cloud feature spatial distribution are calculated with the following formula and stored:

$$\begin{bmatrix} x^{s} \\ y^{s} \end{bmatrix} = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{bmatrix} x^{t} \\ y^{t} \\ 1 \end{bmatrix}$$

where $(x^{t}, y^{t})$ are the element coordinates of each element in the vehicle-end point cloud feature spatial distribution, $(x^{s}, y^{s})$ are the corresponding position coordinates of the roadside-end point cloud feature data, and $\theta_{11}$, $\theta_{12}$, $\theta_{13}$, $\theta_{21}$, $\theta_{22}$ and $\theta_{23}$ are, respectively, the first to sixth conversion parameters for converting the roadside-end point cloud feature data into the vehicle-end point cloud feature spatial distribution.
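A minimal PyTorch sketch of this coordinate traversal is shown below; using torch.nn.functional.affine_grid is an implementation choice rather than something named in the patent, and it works in coordinates normalised to [-1, 1] instead of the element indices used in the formula above.

```python
import torch
import torch.nn.functional as F

B, C6, H2, W2 = 1, 384, 248, 216                     # assumed sizes: 6C x 0.5H x 0.5W
theta = torch.tensor([[[1.0, 0.0, 0.02],              # example A_theta: small translation
                       [0.0, 1.0, -0.01]]])           # shape (B, 2, 3)
grid = F.affine_grid(theta, size=(B, C6, H2, W2), align_corners=True)
# grid[b, y_t, x_t] holds (x_s, y_s): where each vehicle-end element reads from in the
# roadside feature map, in normalised coordinates
print(grid.shape)                                     # torch.Size([1, 248, 216, 2])
```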
Step S403: and determining a corresponding characteristic value of the road side endpoint cloud characteristic data according to the position coordinates so as to complete the mapping of the road side endpoint cloud characteristic data to the vehicle end point cloud characteristic space distribution.
In some embodiments, a bilinear interpolation method is used for determining the characteristic value of the roadside endpoint cloud characteristic data at the coordinate position of the vehicle end point cloud characteristic space distribution.
In some embodiments, if the coordinates $(x^{s}, y^{s})$ corresponding to the roadside-end point cloud feature data calculated in step S402 were all integers, the corresponding feature values in the roadside-end point cloud feature data could be directly assigned to the feature elements at the vehicle-end point cloud feature spatial distribution coordinates $(x^{t}, y^{t})$. However, the coordinates $(x^{s}, y^{s})$ obtained in step S402 are usually fractional. Therefore, a bilinear interpolation method is required to calculate the feature value corresponding to the vehicle-end point cloud feature spatial distribution coordinate. Referring to fig. 7, the basic idea of bilinear interpolation is to estimate the feature value of a point from the feature values of the four points around it. The coordinates of a feature point $P$ on the X and Y axes are both fractional, and the four integer-coordinate points $Q_{11}$, $Q_{12}$, $Q_{21}$ and $Q_{22}$ around $P$ are known. The bilinear interpolation method calculates the feature value corresponding to point $P$ with the following formula:

$$f(P) \approx \frac{f(Q_{11})(x_2 - x)(y_2 - y) + f(Q_{21})(x - x_1)(y_2 - y) + f(Q_{12})(x_2 - x)(y - y_1) + f(Q_{22})(x - x_1)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)}$$

where $P$ is the feature point, $(x, y)$ are the coordinates of $P$, $Q_{11}$, $Q_{21}$, $Q_{12}$ and $Q_{22}$ are the four points adjacent to $P$, $(x_1, y_1)$ are the coordinates of $Q_{11}$, $(x_1, y_2)$ the coordinates of $Q_{12}$, $(x_2, y_1)$ the coordinates of $Q_{21}$, $(x_2, y_2)$ the coordinates of $Q_{22}$, and $f(Q_{11})$, $f(Q_{21})$, $f(Q_{12})$ and $f(Q_{22})$ are the corresponding feature values.
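The bilinear interpolation formula can be sketched directly as follows (boundary handling omitted); in a full pipeline the same effect could also be obtained with torch.nn.functional.grid_sample in bilinear mode, which is an implementation choice, not the patent's wording.

```python
import numpy as np

def bilinear_sample(feature_map: np.ndarray, x: float, y: float) -> np.ndarray:
    """feature_map: (C, H, W); returns the C-dim feature at fractional point P = (x, y)."""
    x1, y1 = int(np.floor(x)), int(np.floor(y))
    x2, y2 = x1 + 1, y1 + 1
    f11 = feature_map[:, y1, x1]   # f(Q11)
    f21 = feature_map[:, y1, x2]   # f(Q21)
    f12 = feature_map[:, y2, x1]   # f(Q12)
    f22 = feature_map[:, y2, x2]   # f(Q22)
    return (f11 * (x2 - x) * (y2 - y) + f21 * (x - x1) * (y2 - y)
            + f12 * (x2 - x) * (y - y1) + f22 * (x - x1) * (y - y1))
```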
Step S500: and fusing the roadside end point cloud characteristic data mapped to the vehicle end point cloud characteristic spatial distribution and the vehicle end point cloud characteristic spatial distribution to realize vehicle-road cooperative perception.
In some embodiments, the vehicle-end point cloud feature spatial distribution and the roadside-end point cloud feature data mapped onto the vehicle-end point cloud feature spatial distribution are concatenated along the channel dimension to obtain a new fused feature. The size of the fused vehicle-end and roadside-end point cloud feature matrix is 12C × 0.5H × 0.5W, where C is the number of channels, H is the height and W is the width.
In some embodiments, after the vehicle-road fusion is realized, the Detection Head is used for carrying out 3D Detection on the fused point cloud characteristics to obtain the centroid coordinates, the outline dimensions and the target orientation and category information of the 3D target. Please refer to fig. 8, which is a schematic diagram of a final detection result obtained after point cloud feature fusion.
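Step S500 can be sketched as a channel-dimension concatenation followed by a detection head; the single-convolution detection head and the output layout below are stand-ins for the PointPillars Detection Head, not its actual implementation.

```python
import torch
import torch.nn as nn

C = 64
vehicle_features = torch.randn(1, 6 * C, 248, 216)              # 6C x 0.5H x 0.5W
roadside_features_rectified = torch.randn(1, 6 * C, 248, 216)   # after decompression + rectification

fused = torch.cat([vehicle_features, roadside_features_rectified], dim=1)  # 12C x 0.5H x 0.5W

num_classes, box_params = 3, 7   # assumed layout: (x, y, z, w, l, h, yaw) per box + class scores
detection_head = nn.Conv2d(12 * C, box_params + num_classes, kernel_size=1)
predictions = detection_head(fused)
print(fused.shape, predictions.shape)
```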
Referring to fig. 9, in some embodiments of the present application, a vehicle-road cooperative feature level perception fusion system 800 is provided, which includes a vehicle-end calculating unit 810 and a road-side calculating unit 820, which are described in detail below.
The vehicle end calculating unit 810 is configured to obtain vehicle end point cloud information and a timestamp corresponding to the vehicle end point cloud information. And after feature extraction is performed on the vehicle-end point cloud information to generate a vehicle-end pseudo image, the vehicle-end computing unit 810 performs high-dimensional feature extraction on the vehicle-end pseudo image to generate vehicle-end point cloud feature spatial distribution.
The roadside end calculation unit 820 is configured to acquire roadside end point cloud information and store the roadside end point cloud information according to a timestamp corresponding to the vehicle end point cloud information to generate roadside end information. The roadside end computing unit 820 further performs feature extraction on the roadside end information to generate a roadside end pseudo image, performs high-dimensional feature extraction on the roadside end pseudo image, and performs dimension reduction compression to generate roadside end point cloud feature data.
The vehicle-end computing unit 810 acquires road-side end point cloud feature data, decompresses the road-side end point cloud feature data, and maps the decompressed road-side end point cloud feature data to vehicle-end point cloud feature space distribution by using feature space correction.
The vehicle end computing unit 810 fuses vehicle end point cloud feature spatial distribution and road side end point cloud feature data mapped to the vehicle end point cloud feature spatial distribution to achieve vehicle road cooperative sensing.
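As a rough illustration of how the vehicle-end computing unit 810 ties these steps together for one frame, the following sketch uses only placeholder components passed in as arguments (obu, backbone, decompress, rectify, detection_head), all of which are assumptions for illustration.

```python
import torch

def vehicle_end_cycle(vehicle_frame, obu, backbone, decompress, rectify, detection_head):
    # Step S100: broadcast the point cloud timestamp and vehicle pose via the OBU
    obu.broadcast(timestamp=vehicle_frame.timestamp, pose=vehicle_frame.pose)

    # Step S200: pseudo-image + Backbone -> 6C x 0.5H x 0.5W vehicle-end features
    vehicle_features = backbone(vehicle_frame.pseudo_image)

    # Steps S300/S400: receive 2C compressed roadside features, restore to 6C and rectify
    roadside_compressed = obu.receive_roadside_features()
    roadside_aligned = rectify(decompress(roadside_compressed), vehicle_features)

    # Step S500: channel concatenation and 3D detection on the fused features
    fused = torch.cat([vehicle_features, roadside_aligned], dim=1)   # 12C channels
    return detection_head(fused)
```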
The present application establishes a feature-level cooperative perception fusion framework oriented to vehicle-road cooperation, in which the vehicle end and the roadside end both use the PointPillars network, which has the best real-time performance, to extract point cloud features. First, because feature-level data fusion is adopted, the whole neural network spanning the vehicle end and the roadside end can be optimized together: the networks of the vehicle end and the roadside end can be trained and optimized as a whole, which maximally combines the advantages of the two different data sources, roadside and vehicle end, and improves the performance of 3D target detection. Second, compared with target-level data fusion, feature-level data fusion allows the vehicle end to receive the data transmitted by the roadside end in a shorter time, which guarantees the timeliness of the fusion. This overcomes the defect that target-level information fusion cannot be fused effectively because of excessive transmission delay.
In the present application, 1×1 convolution kernels are used to compress and decompress the point cloud features, which substantially reduces the data volume transmitted from the roadside end to the vehicle end, lowers the bandwidth requirements on the roadside-end communication unit RSU and the vehicle-end communication unit OBU, greatly improves the data processing speed and guarantees the real-time performance of the algorithm. Although the point cloud information from the two different sources, the vehicle end and the roadside end, is basically consistent at the initial moment of data acquisition, a certain error always exists between the two in the time dimension of data acquisition. The present application therefore performs feature space rectification on the dimension-increased roadside-end point cloud feature data with a convolutional neural network, restoring the roadside point cloud feature data to the state corresponding to the acquisition moment of the vehicle-end point cloud data. This guarantees the spatial consistency of the two kinds of data, provides a certain robustness to time delay when fusing data from different sources, and indirectly improves the perception performance of target detection after feature fusion.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a portable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (9)

1. A feature level collaborative perception fusion method for vehicle-road collaboration is characterized by comprising the following steps:
acquiring vehicle end point cloud information and a timestamp corresponding to the vehicle end point cloud information;
performing feature extraction on the vehicle end point cloud information to generate a vehicle end point cloud pseudo image, and performing high-dimensional feature extraction on the vehicle end point cloud pseudo image to generate vehicle end point cloud feature spatial distribution;
obtaining road side endpoint cloud feature data, wherein the road side endpoint cloud feature data are determined according to road side endpoint cloud information stored according to the time stamp;
predicting and calculating transformation parameters of the road-side point cloud characteristic data mapped to vehicle-end point cloud characteristic space distribution by using a 2D convolutional neural network;
determining corresponding position coordinates of the road-side end point cloud characteristic data in the vehicle-side end point cloud characteristic space distribution according to the transformation parameters;
determining a corresponding characteristic value of the road side endpoint cloud characteristic data according to the position coordinates so as to complete mapping of the road side endpoint cloud characteristic data to vehicle end point cloud characteristic space distribution;
and fusing the road side endpoint cloud characteristic data mapped to the vehicle end point cloud characteristic spatial distribution and the vehicle end point cloud characteristic spatial distribution to realize vehicle road cooperative sensing.
2. The feature-level collaborative perception fusion method for vehicle-road collaboration as claimed in claim 1, wherein the performing feature extraction on the vehicle-end information to generate a vehicle-end point cloud pseudo-image, and performing high-dimensional feature extraction on the vehicle-end point cloud pseudo-image to generate a vehicle-end point cloud feature spatial distribution comprises:
performing feature extraction on the vehicle end information by using a point cloud object detection network to generate a vehicle end point cloud pseudo image, and performing high-dimensional feature extraction on the vehicle end point cloud pseudo image to generate a vehicle end point cloud feature spatial distribution of 6C × 0.5H × 0.5W; wherein C is the number of channels; H is the height; W is the width.
3. The feature-level collaborative perception fusion method for vehicle-road collaboration according to claim 1, wherein the determining of the roadside end point cloud feature data from the roadside end point cloud information stored according to the timestamp comprises:
obtaining the roadside end point cloud feature data by performing feature extraction on the roadside end point cloud information stored according to the timestamp, followed by high-dimensional feature extraction and dimension-reduction compression, the resulting roadside end point cloud feature data being of size 2C × 0.5H × 0.5W, wherein C is the number of channels, H is the height, and W is the width.
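Claim 3's dimension-reduction compression shrinks the roadside features to 2C × 0.5H × 0.5W before transmission, a threefold reduction relative to the 6C-channel layout of claim 2. A 1 × 1 convolution is one straightforward way to realize such a channel compression; the fragment below is an assumed illustration, not the patented network.

```python
import torch
import torch.nn as nn

# Hypothetical channel compression for transmission: 6C -> 2C, spatial size unchanged.
C = 64
compress = nn.Conv2d(6 * C, 2 * C, kernel_size=1)

roadside_feats = torch.randn(1, 6 * C, 248, 216)  # 6C x 0.5H x 0.5W
compressed = compress(roadside_feats)             # 2C x 0.5H x 0.5W, ready to send
print(compressed.shape)  # torch.Size([1, 128, 248, 216])
```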
4. The feature-level collaborative perception fusion method for vehicle-road collaboration according to claim 1, wherein decompressing the roadside end point cloud feature data comprises:
decompressing the roadside end point cloud feature data by using a convolution kernel of a set size and a set number of channels to generate decompressed roadside end point cloud feature data of size 6C × 0.5H × 0.5W, wherein C is the number of channels, H is the height, and W is the width.
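Decompression in claim 4 restores the 2C-channel payload to the 6C-channel layout used on the vehicle end. The claim does not specify the kernel beyond "a set convolution kernel of a set channel"; a 1 × 1 convolution with 6C output channels, mirroring the compression step above, is assumed in this sketch.

```python
import torch
import torch.nn as nn

# Hypothetical decompression on the vehicle end: 2C -> 6C, spatial size unchanged.
C = 64
decompress = nn.Conv2d(2 * C, 6 * C, kernel_size=1)

received = torch.randn(1, 2 * C, 248, 216)  # compressed roadside features
roadside_feats = decompress(received)       # back to 6C x 0.5H x 0.5W
print(roadside_feats.shape)  # torch.Size([1, 384, 248, 216])
```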
5. The feature-level collaborative perception fusion method for vehicle-road collaboration according to claim 1, wherein the predicting, by using a 2D convolutional neural network, of the transformation parameters for mapping the roadside end point cloud feature data to the vehicle end point cloud feature spatial distribution comprises:
predicting, by the 2D convolutional neural network, the transformation parameters defined by the following formula:
$$A_\theta = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix}$$
wherein $A_\theta$ is the transformation parameter matrix, and $\theta_{11}$, $\theta_{12}$, $\theta_{13}$, $\theta_{21}$, $\theta_{22}$, $\theta_{23}$ are, respectively, the first to sixth transformation parameters for converting the roadside end point cloud feature data into the vehicle end point cloud feature spatial distribution.
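The six parameters of $A_\theta$ form a 2 × 3 affine matrix, so the 2D convolutional neural network of claim 5 plays the role of a localization/regression head in the style of a spatial transformer network. A minimal, assumed PyTorch version is sketched below; the real network's depth, its exact inputs (e.g., whether it sees both feature maps), and its initialization are not specified in the claims.

```python
import torch
import torch.nn as nn

class AffineParamNet(nn.Module):
    """Hypothetical regression head: predicts the 2x3 affine matrix A_theta that maps
    roadside end features into the vehicle-end feature space."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> (N, 64, 1, 1)
        )
        self.fc = nn.Linear(64, 6)          # six parameters theta_11 ... theta_23
        # Initialize to the identity transform so training starts from "no shift".
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, roadside_feats: torch.Tensor, vehicle_feats: torch.Tensor) -> torch.Tensor:
        x = torch.cat([roadside_feats, vehicle_feats], dim=1)
        theta = self.fc(self.conv(x).flatten(1))
        return theta.view(-1, 2, 3)         # one A_theta per batch element

# Example with an assumed 6C = 384 channels per source (the head sees their concatenation).
net = AffineParamNet(in_channels=2 * 384)
A_theta = net(torch.randn(1, 384, 248, 216), torch.randn(1, 384, 248, 216))
print(A_theta.shape)  # torch.Size([1, 2, 3])
```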
6. The feature-level collaborative perception fusion method for vehicle-road collaboration according to claim 1, wherein the determining, according to the transformation parameters, of the corresponding position coordinates of the roadside end point cloud feature data in the vehicle end point cloud feature spatial distribution comprises:
calculating the corresponding position coordinates of the roadside end point cloud feature data in the vehicle end point cloud feature spatial distribution by using the following formula:
$$\begin{pmatrix} x^{s} \\ y^{s} \end{pmatrix} = A_\theta \begin{pmatrix} x^{t} \\ y^{t} \\ 1 \end{pmatrix} = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x^{t} \\ y^{t} \\ 1 \end{pmatrix}$$
wherein $(x^{t}, y^{t})$ are the element coordinates of each element in the vehicle end point cloud feature spatial distribution; $(x^{s}, y^{s})$ are the corresponding position coordinates of the roadside end point cloud feature data in the vehicle end point cloud feature spatial distribution; and $\theta_{11}$, $\theta_{12}$, $\theta_{13}$, $\theta_{21}$, $\theta_{22}$, $\theta_{23}$ are, respectively, the first to sixth transformation parameters for converting the roadside end point cloud feature data into the vehicle end point cloud feature spatial distribution.
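The formula above says that each target location of the vehicle-end feature grid is back-projected through $A_\theta$ to find where it should sample the roadside feature map. The sketch below computes that sampling grid explicitly with homogeneous coordinates; it is an illustration under the stated assumptions, not the patented implementation.

```python
import torch

def affine_sample_coords(A_theta: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """For every element (x_t, y_t) of an h x w vehicle-end feature grid, return the
    source coordinate (x_s, y_s) = A_theta @ (x_t, y_t, 1)^T in the roadside feature map.

    A_theta: (2, 3) affine matrix [[t11, t12, t13], [t21, t22, t23]].
    Returns an (h, w, 2) tensor of (x_s, y_s) coordinates.
    """
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32),
                            indexing="ij")
    homog = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)  # (h, w, 3): (x_t, y_t, 1)
    return homog @ A_theta.T                                    # (h, w, 2): (x_s, y_s)

# The identity transform maps every element onto itself.
A_identity = torch.tensor([[1., 0., 0.], [0., 1., 0.]])
print(affine_sample_coords(A_identity, 4, 4)[2, 3])  # tensor([3., 2.]) -> (x_s, y_s)
```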
7. The feature-level collaborative perception fusion method for vehicle-road collaboration according to claim 1, wherein the determining of the corresponding feature value of the roadside end point cloud feature data according to the position coordinates, to complete the mapping of the roadside end point cloud feature data to the vehicle end point cloud feature spatial distribution, comprises:
determining, by bilinear interpolation, the feature value of the roadside end point cloud feature data at the coordinate position in the vehicle end point cloud feature spatial distribution.
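Because the back-projected coordinates generally fall between grid cells, claim 7 reads the roadside feature value by bilinear interpolation. PyTorch's `torch.nn.functional.grid_sample` performs exactly this sampling once the coordinates are normalized to [-1, 1]; the self-contained sketch below (which folds in the coordinate computation from the previous example) is an assumed way to wire it up, not the patented code.

```python
import torch
import torch.nn.functional as F

def warp_roadside_features(roadside_feats: torch.Tensor, A_theta: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample roadside features at the coordinates given by the 2x3 affine
    matrix A_theta, yielding a feature map aligned with the vehicle-end grid."""
    n, c, h, w = roadside_feats.shape
    # Homogeneous target coordinates (x_t, y_t, 1) for every grid element.
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    homog = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)   # (h, w, 3)
    coords = homog @ A_theta.T                                   # (h, w, 2): (x_s, y_s)
    # grid_sample expects (x, y) coordinates normalized to [-1, 1].
    norm_x = 2.0 * coords[..., 0] / (w - 1) - 1.0
    norm_y = 2.0 * coords[..., 1] / (h - 1) - 1.0
    grid = torch.stack([norm_x, norm_y], dim=-1).unsqueeze(0).expand(n, h, w, 2)
    return F.grid_sample(roadside_feats, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)

# The identity transform reproduces the input feature map (up to numerical error).
feats = torch.randn(1, 384, 64, 64)
warped = warp_roadside_features(feats, torch.tensor([[1., 0., 0.], [0., 1., 0.]]))
print(torch.allclose(warped, feats, atol=1e-4))  # True
```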
8. A feature-level collaborative perception fusion system for vehicle-road collaboration, characterized by comprising:
a vehicle end computing unit, used for acquiring vehicle end point cloud information and a timestamp corresponding to the vehicle end point cloud information, performing feature extraction on the vehicle end point cloud information to generate a vehicle end pseudo image, and performing high-dimensional feature extraction on the vehicle end pseudo image to generate a vehicle end point cloud feature spatial distribution;
a roadside end computing unit, used for acquiring roadside end point cloud information and storing the roadside end point cloud information according to the timestamp corresponding to the vehicle end point cloud information to generate roadside end information;
the roadside end computing unit is further used for performing feature extraction on the roadside end information to generate a roadside end pseudo image, and performing high-dimensional feature extraction on the roadside end pseudo image followed by dimension-reduction compression to generate roadside end point cloud feature data;
the vehicle end computing unit acquires and decompresses the roadside end point cloud feature data; predicts, by using a 2D convolutional neural network, transformation parameters for mapping the roadside end point cloud feature data to the vehicle end point cloud feature spatial distribution; determines, according to the transformation parameters, corresponding position coordinates of the roadside end point cloud feature data in the vehicle end point cloud feature spatial distribution; and determines a corresponding feature value of the roadside end point cloud feature data according to the position coordinates, to complete the mapping of the roadside end point cloud feature data to the vehicle end point cloud feature spatial distribution;
and the vehicle end computing unit fuses the vehicle end point cloud feature spatial distribution with the roadside end point cloud feature data mapped to the vehicle end point cloud feature spatial distribution, so as to realize vehicle-road cooperative perception.
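Claim 8 leaves the fusion operator itself open; once the roadside features are warped onto the vehicle-end grid, common choices are element-wise maximum, summation, or concatenation followed by a convolution. The fragment below illustrates only the concatenation variant and is an assumption, since the patent does not state which operator is used.

```python
import torch
import torch.nn as nn

# Hypothetical feature-level fusion on the vehicle end: both inputs share the
# 6C x 0.5H x 0.5W layout after the roadside features have been warped.
C = 64
fuse = nn.Sequential(
    nn.Conv2d(2 * 6 * C, 6 * C, kernel_size=1),  # concat -> back to 6C channels
    nn.ReLU(inplace=True),
)

vehicle_feats = torch.randn(1, 6 * C, 248, 216)
warped_roadside_feats = torch.randn(1, 6 * C, 248, 216)
fused = fuse(torch.cat([vehicle_feats, warped_roadside_feats], dim=1))
print(fused.shape)  # torch.Size([1, 384, 248, 216]) -> passed on to the detection head
```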
9. A computer-readable storage medium, characterized in that the medium has stored thereon a program which is executable by a processor to implement the method according to any one of claims 1-7.
CN202211480590.3A 2022-11-24 2022-11-24 Feature level cooperative perception fusion method and system for vehicle-road cooperation Active CN115578709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211480590.3A CN115578709B (en) 2022-11-24 2022-11-24 Feature level cooperative perception fusion method and system for vehicle-road cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211480590.3A CN115578709B (en) 2022-11-24 2022-11-24 Feature level cooperative perception fusion method and system for vehicle-road cooperation

Publications (2)

Publication Number Publication Date
CN115578709A CN115578709A (en) 2023-01-06
CN115578709B true CN115578709B (en) 2023-04-07

Family

ID=84590555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211480590.3A Active CN115578709B (en) 2022-11-24 2022-11-24 Feature level cooperative perception fusion method and system for vehicle-road cooperation

Country Status (1)

Country Link
CN (1) CN115578709B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339681A (en) * 2022-03-10 2022-04-12 国汽智控(北京)科技有限公司 Cloud vehicle road cooperative processing method, system, equipment and storage medium
CN114495035A (en) * 2021-12-29 2022-05-13 中智行(上海)交通科技有限公司 Holographic data self-supervision learning method based on vehicle-road cooperation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190361454A1 (en) * 2018-05-24 2019-11-28 GM Global Technology Operations LLC Control systems, control methods and controllers for an autonomous vehicle
CN111770451B (en) * 2020-05-26 2022-02-18 同济大学 Road vehicle positioning and sensing method and device based on vehicle-road cooperation
CN112462381A (en) * 2020-11-19 2021-03-09 浙江吉利控股集团有限公司 Multi-laser radar fusion method based on vehicle-road cooperation
CN114034316A (en) * 2021-10-29 2022-02-11 上海智能网联汽车技术中心有限公司 Positioning performance evaluation method and system of road side system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114495035A (en) * 2021-12-29 2022-05-13 中智行(上海)交通科技有限公司 Holographic data self-supervision learning method based on vehicle-road cooperation
CN114339681A (en) * 2022-03-10 2022-04-12 国汽智控(北京)科技有限公司 Cloud vehicle road cooperative processing method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN115578709A (en) 2023-01-06

Similar Documents

Publication Publication Date Title
US11113959B2 (en) Crowdsourced detection, identification and sharing of hazardous road objects in HD maps
CN108574929B (en) Method and apparatus for networked scene rendering and enhancement in an onboard environment in an autonomous driving system
CN111554088B (en) Multifunctional V2X intelligent roadside base station system
US11214268B2 (en) Methods and apparatus for unsupervised multimodal anomaly detection for autonomous vehicles
US10445928B2 (en) Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types
CN110325818B (en) Joint 3D object detection and orientation estimation via multimodal fusion
EP4152204A1 (en) Lane line detection method, and related apparatus
CN110796692A (en) End-to-end depth generation model for simultaneous localization and mapping
DE112019001657T5 (en) SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD, PROGRAM AND MOBILE BODY
US11035933B2 (en) Transition map between lidar and high-definition map
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
WO2022206414A1 (en) Three-dimensional target detection method and apparatus
CN114332494A (en) Three-dimensional target detection and identification method based on multi-source fusion under vehicle-road cooperation scene
WO2023185564A1 (en) Visual enhancement method and system based on multi-connected vehicle space alignment feature fusion
US10839522B2 (en) Adaptive data collecting and processing system and methods
US20210064872A1 (en) Object detecting system for detecting object by using hierarchical pyramid and object detecting method thereof
CN115578709B (en) Feature level cooperative perception fusion method and system for vehicle-road cooperation
CN112241963A (en) Lane line identification method and system based on vehicle-mounted video and electronic equipment
US20230162513A1 (en) Vehicle environment modeling with a camera
WO2018143278A1 (en) Image processing device, image recognition device, image processing program, and image recognition program
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
US20210383134A1 (en) Advanced driver assist system and method of detecting object in the same
CN114898144A (en) Automatic alignment method based on camera and millimeter wave radar data
CN113221756A (en) Traffic sign detection method and related equipment
CN112649008A (en) Method for providing a digital positioning map for a vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant