CN115379408A - Scene perception-based V2X multi-sensor fusion method and device

Scene perception-based V2X multi-sensor fusion method and device

Info

Publication number
CN115379408A
CN115379408A (application number CN202211314591.0A)
Authority
CN
China
Prior art keywords
scene
sensor
vehicle
target detection
information
Prior art date
Legal status
Granted
Application number
CN202211314591.0A
Other languages
Chinese (zh)
Other versions
CN115379408B (en)
Inventor
常琳
蒋华涛
杨昊
仲雪君
Current Assignee
Sirun Beijing Technology Co ltd
Original Assignee
Sirun Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sirun Beijing Technology Co ltd filed Critical Sirun Beijing Technology Co ltd
Priority to CN202211314591.0A priority Critical patent/CN115379408B/en
Publication of CN115379408A publication Critical patent/CN115379408A/en
Application granted granted Critical
Publication of CN115379408B publication Critical patent/CN115379408B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04W4/38: Services specially adapted for particular environments, situations or purposes, for collecting sensor information
    • H04W4/40: Services specially adapted for particular environments, situations or purposes, for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01S13/86: Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865: Combination of radar systems with lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S19/45: Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G06N3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N3/08: Neural networks; learning methods
    • H04L67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks


Abstract

The invention relates to a scene perception-based V2X multi-sensor fusion method and device. The method comprises: acquiring the sensor data collected by each sensor, performing time synchronization on the sensor data, and then preprocessing it to obtain data to be processed; comparing the data to be processed of the sensors pairwise to obtain target detection correlation coefficients between the sensors; obtaining the confidence of each sensor from the target detection correlation coefficients and performing scene judgment based on the confidences; and determining the types of input sensors according to the judged scene, determining the weight of each input sensor with a preset neural network, and fusing the data collected by the corresponding sensors to obtain a fusion result. By judging the scene from the data collected by the sensors, the method and the device can effectively solve the problem of multi-sensor fusion in different scenes and allow V2X to play a greater role in the intelligent networked automobile.

Description

Scene perception-based V2X multi-sensor fusion method and device
Technical Field
The invention belongs to the technical field of the automobile industry, and particularly relates to a scene perception-based V2X multi-sensor fusion method and device.
Background
With the development of the automobile industry toward intelligence and networking, the integration of V2X and single-vehicle intelligence is an inevitable trend. In the prior art, V2X information is mostly used simply as a supplement to single-vehicle sensor information. For example, published patent application CN114419591A describes a C-V2X-based multi-sensor information fusion vehicle detection method in which lidar-camera fusion detection is merely supplemented by vehicle-to-vehicle communication. Published patent application CN109922439A describes a multi-sensor data fusion method in which the data collected by V2X and the other sensors are simply filtered and prioritized and then output as the fusion result. Published patent application CN113947158A describes a data fusion method and apparatus for an intelligent vehicle that uses Kalman filtering to process the fused data and an XGBoost model to handle the uncertainty of multi-sensor data, without considering the influence of the environment on the sensors.
In summary, the prior art focuses on data fusion among the single-vehicle intelligent sensors, such as the laser radar, camera and millimeter wave radar, while V2X information is used only as a supplementary means or for judging perception detection delay. The strengths and weaknesses of the various sensors in different scenes are not fully considered, which impairs the perception capability of the intelligent networked vehicle with respect to the driving environment.
Disclosure of Invention
In view of the above, the present invention aims to overcome the defects of the prior art and provides a scene perception-based V2X multi-sensor fusion method and device, so as to solve the problem in the prior art that the perception capability of the intelligent networked automobile with respect to the driving environment is impaired because the strengths and weaknesses of the various sensors in different scenes are not fully considered.
In order to achieve the purpose, the invention adopts the following technical scheme: a V2X multi-sensor fusion method based on scene perception comprises the following steps:
acquiring the sensor data collected by each sensor, and carrying out time synchronization on the sensor data; the sensors include: a vehicle-mounted OBU terminal, a vehicle-mounted navigation terminal, a laser radar, a camera and a millimeter wave radar;
preprocessing the sensor data after time synchronization to obtain data to be processed;
comparing the data to be processed of each sensor pairwise to obtain a target detection correlation coefficient between the two sensors;
obtaining the confidence coefficient of each sensor based on the target detection correlation coefficient, and performing scene judgment based on the confidence coefficient;
determining the type of the input sensor according to the judged scene, determining the weight of each input sensor according to a preset neural network, and fusing data collected by the corresponding sensors according to the weight to obtain a fusion result.
Further, the vehicle-mounted OBU terminal is configured to acquire V2X information, where the V2X information includes: vehicle ID, location information, GNSS accuracy, speed, direction, acceleration, vehicle size, vehicle type, and timestamp; the position information comprises longitude, latitude and height;
the vehicle-mounted navigation terminal is used for acquiring GNSS information, and the GNSS information comprises: longitude, latitude, altitude, and timestamp;
the lidar is configured to obtain lidar information, the lidar information including: angle, distance, reflection intensity, and timestamp;
the camera is used for obtaining camera information, the camera information includes: current frame image information and a timestamp;
the millimeter wave radar is used for obtaining millimeter wave radar information, and the millimeter wave radar information comprises: distance, angle, speed, and timestamp.
Further, coordinate transformation is performed for each sensor so that the coordinates of all sensors are unified into an east-north-up (ENU) coordinate system whose origin is the midpoint between the two rear wheels of the ego vehicle; preprocessing the time-synchronized sensor data comprises:
performing fusion processing on the V2X information and the GNSS information by adopting a Kalman filter to obtain a target detection result;
obtaining bird's-eye views of the laser radar information, the camera information and the millimeter wave radar information, respectively,
adopting a pre-trained CNN neural network to perform target detection on the aerial view of the laser radar information, the camera information and the millimeter wave radar information to respectively obtain target detection results of the laser radar, the camera and the millimeter wave radar;
determining all target detection results as data to be processed;
the target detection result of the laser radar is that the laser radar marks the target position, type and detection confidence in the aerial view; the target detection result of the camera is that the camera marks the position, type and detection confidence of the target in the aerial view; and the target detection result of the millimeter wave radar is that the millimeter wave radar marks the target position, type and detection confidence in the aerial view.
Further, the fusion processing of the V2X information and the GNSS information by using the kalman filter includes:
obtaining a bird's-eye view of V2X target detection according to V2X information acquired by a vehicle-mounted OBU terminal of a vehicle and V2X information of other vehicles acquired by vehicle-mounted OBU terminals of other vehicles; the bird's eye view includes the position of the vehicle and other vehicles in the bird's eye view, the size and relative orientation of the vehicles.
Further, the step of comparing the data to be processed of each sensor two by two to obtain a target detection correlation coefficient between the two sensors includes:
according to the acquired data to be processed, respectively obtaining a target detection correlation coefficient from the vehicle-mounted OBU terminal to the laser radar, from the vehicle-mounted OBU terminal to the camera, from the vehicle-mounted OBU terminal to the millimeter wave radar, from the laser radar to the vehicle-mounted OBU terminal, from the laser radar to the millimeter wave radar, from the laser radar to the camera, from the millimeter wave radar to the vehicle-mounted OBU terminal, from the millimeter wave radar to the laser radar, and from the millimeter wave radar to the camera.
Further, the obtaining the confidence of each sensor based on the target detection correlation coefficient includes:
respectively calculating the sum of target detection correlation coefficients of each sensor to other sensors;
and normalizing the sum of the target detection correlation coefficients obtained by the sensors to obtain the confidence coefficient of each sensor.
Further, the scenes comprise: a clear day scene, a rainy day scene, a foggy day scene, a snowy day scene, an occlusion scene and a severe electromagnetic environment scene; performing scene judgment based on the confidences comprises:
if the confidence differences between the sensors are not greater than threshold 1, determining that the scene is a clear day scene;
if the confidence of the camera differs from the confidences of the other sensors by more than threshold 2 and by no more than threshold 3, determining that the scene is a rainy day scene;
if the confidence of the camera differs from the confidences of the other sensors by more than threshold 3, determining that the scene is a foggy day scene;
if the confidence of the camera differs from the confidences of the vehicle-mounted OBU terminal and the millimeter wave radar by more than threshold 2, and the confidence of the laser radar differs from the confidences of the vehicle-mounted OBU terminal and the millimeter wave radar by more than threshold 4, determining that the scene is a snowy day scene;
if the confidence of the vehicle-mounted OBU terminal differs from the confidences of the other sensors by more than threshold 5, and the confidence differences among the other sensors are not greater than threshold 6, determining that the scene is an occlusion scene;
and if the confidence of the vehicle-mounted OBU terminal differs from the confidences of the other sensors by more than threshold 5, and the confidence differences among the other sensors are not greater than threshold 7, determining that the scene is a severe electromagnetic environment scene.
Further, the determining the type of the input sensor according to the determined scene includes:
if the scene is a clear day scene, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar, a camera and a millimeter wave radar;
if the scene is a rainy scene, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar and a millimeter wave radar;
if the scene is a foggy day scene, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar and a millimeter wave radar;
if the scene is a snow scene, the type of the input sensor comprises a vehicle-mounted OBU terminal and a millimeter wave radar;
if the scene is an occlusion scene, the type of the input sensor is mainly the vehicle-mounted OBU terminal, and other sensors are selected according to the target detection correlation coefficient from other sensors to the vehicle-mounted OBU terminal;
if the scene is a severe electromagnetic environment scene, the types of the input sensors comprise a laser radar, a camera and a millimeter wave radar.
Furthermore, a GPS satellite terminal is adopted to perform time synchronization on sensor data acquired by the vehicle-mounted OBU terminal, the vehicle-mounted navigation terminal, the laser radar, the camera and the millimeter wave radar.
An embodiment of the present application provides a scene perception-based V2X multi-sensor fusion device, including:
the acquisition module is used for acquiring the sensor data collected by each sensor and carrying out time synchronization on the sensor data; the sensors include: a vehicle-mounted OBU terminal, a vehicle-mounted navigation terminal, a laser radar, a camera and a millimeter wave radar;
the processing module is used for preprocessing the sensor data after time synchronization to obtain data to be processed;
the comparison module is used for comparing the data to be processed of each sensor pairwise to obtain a target detection correlation coefficient between the two sensors;
the judging module is used for obtaining the confidence coefficient of each sensor based on the target detection correlation coefficient and carrying out scene judgment based on the confidence coefficient;
and the fusion module is used for determining the type of the input sensor according to the judged scene, determining the weight of each input sensor according to a preset neural network, and fusing data acquired by the corresponding sensor according to the weight to obtain a fusion result.
By adopting the technical scheme, the invention can achieve the following beneficial effects:
the invention provides a scene perception-based V2X multi-sensor fusion method and a scene perception-based V2X multi-sensor fusion device, wherein sensor data acquired by each sensor are subjected to time synchronization to obtain the sensor data after the time synchronization; preprocessing the data of each sensor to obtain respective output results; then, comparing the output information of each sensor pairwise to obtain a target detection correlation coefficient between every two sensors, obtaining confidence scores of various sensors based on the correlation coefficient, and carrying out scene judgment by taking the confidence scores as the basis; and finally, determining the input sensor type of sensor fusion according to the scene, training a neural network to obtain the weight of each sensor, and obtaining the final fusion result. The method and the device can effectively solve the problem of multi-sensor data fusion in different scenes, and enable the V2X to play a greater role in the intelligent networked automobile.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram illustrating steps of a scene-aware-based V2X multi-sensor fusion method according to the present invention;
FIG. 2 is a schematic flow diagram of a scene-aware-based V2X multi-sensor fusion method according to the present invention;
fig. 3 is a schematic structural diagram of the scene-aware-based V2X multi-sensor fusion device of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
A specific scene-aware-based V2X multi-sensor fusion method and apparatus provided in the embodiments of the present application are described below with reference to the accompanying drawings.
As shown in fig. 1, the method for scene-aware-based V2X multi-sensor fusion provided in this embodiment of the present application includes:
s101, acquiring sensor data acquired by each sensor, and carrying out time synchronization on the sensor data; as shown in fig. 2, the sensor includes: the system comprises a vehicle-mounted OBU terminal, a vehicle-mounted navigation terminal, a laser radar, a camera and a millimeter wave radar;
in some embodiments, the on-board OBU terminal is configured to obtain V2X information, where the V2X information includes: vehicle ID, location information, GNSS accuracy, speed, direction, acceleration, vehicle size, vehicle type, and timestamp; the position information comprises longitude, latitude and height;
in some embodiments, the GPS satellite terminal is used to perform time synchronization on the sensor data collected by the vehicle-mounted OBU terminal, the vehicle-mounted navigation terminal, the laser radar, the camera, and the millimeter wave radar.
The vehicle-mounted navigation terminal is used for acquiring GNSS information, and the GNSS information comprises: longitude, latitude, altitude, and timestamp;
the lidar is configured to obtain lidar information, the lidar information including: angle, distance, reflection intensity, and timestamp;
the camera is used for obtaining camera information, the camera information includes: current frame image information and a timestamp;
the millimeter wave radar is used for obtaining millimeter wave radar information, and the millimeter wave radar information comprises: distance, angle, speed, and timestamp.
It should be noted that, since different sensors have different data acquisition periods, time synchronization is very important for V2X multi-sensor fusion. For example, the V2X update period is 100 ms, the data update periods of the laser radar and the millimeter wave radar vary over tens of milliseconds, and the camera frame rate is 20-30 fps. In a specific implementation, the sensor with the shortest data acquisition period is taken as the reference to determine the data acquisition period T of the system, and each sensor is time-synchronized according to its own data acquisition rate. The following description takes the V2X information of the vehicle-mounted OBU terminal as an example:
assuming the system data acquisition period is determined to be T ms, and since the V2X update period is 100 ms, the V2X data need to be interpolated according to the V2X timestamps at a ratio of 100/T, so as to obtain time-synchronized V2X data with a period of T ms.
For the laser radar and the millimeter wave radar, linear or nonlinear interpolation can be carried out according to the current vehicle speed, and the laser radar and the millimeter wave radar data after time synchronization are obtained.
For the camera, the optical flow method can be adopted to perform frame compensation according to the current vehicle speed, so as to obtain the camera data after time synchronization.
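As a minimal illustration of the interpolation step described above (not taken from the patent; the function name, the linear-interpolation choice and all numeric values are assumptions), a slowly updated stream such as V2X can be resampled onto the common system period T as follows:

    import numpy as np

    def resample_to_period(timestamps_ms, values, period_ms, t_start_ms, t_end_ms):
        # Linearly interpolate one irregular sensor stream onto the fixed system period T (ms).
        grid = np.arange(t_start_ms, t_end_ms + period_ms, period_ms)
        return grid, np.interp(grid, timestamps_ms, values)

    # Example: V2X positions updated every 100 ms, resampled to an assumed 20 ms system period
    t_v2x = np.array([0.0, 100.0, 200.0, 300.0])   # V2X timestamps in ms
    x_v2x = np.array([0.0, 2.8, 5.5, 8.1])         # longitudinal position in metres
    grid, x_sync = resample_to_period(t_v2x, x_v2x, 20.0, 0.0, 300.0)

For the laser radar, millimeter wave radar and camera, the speed-compensated interpolation and optical-flow frame compensation described above would replace the simple linear interpolation used in this sketch.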
S102, preprocessing the sensor data after time synchronization to obtain data to be processed;
firstly, the coordinates of each sensor are transformed so that they are unified into an east-north-up (ENU) coordinate system whose origin is the midpoint between the two rear wheels of the ego vehicle;
performing fusion processing on the V2X information and the GNSS information by adopting a Kalman filter to obtain a target detection result;
obtaining bird's-eye views of the laser radar information, the camera information and the millimeter wave radar information, respectively,
adopting a pre-trained CNN neural network to perform target detection on the aerial view of the laser radar information, the camera information and the millimeter wave radar information to respectively obtain target detection results of the laser radar, the camera and the millimeter wave radar;
determining all target detection results as data to be processed;
the target detection result of the laser radar is that the laser radar marks the target position, type and detection confidence in the aerial view; the target detection result of the camera is that the camera marks the position, type and detection confidence of the target in the aerial view; and the target detection result of the millimeter wave radar is that the millimeter wave radar marks the target position, type and detection confidence in the aerial view.
Specifically, the Kalman filter is used to fuse the V2X information and the GNSS information, which improves the accuracy of information such as position and speed. From the information acquired by the own vehicle's vehicle-mounted OBU terminal and the V2X information of other vehicles received from their vehicle-mounted OBU terminals, a bird's-eye view of V2X target detection is obtained; the bird's-eye view contains the positions of the own vehicle and the other vehicles, the sizes of the vehicles, their relative orientations, and the like.
The point cloud data of the laser radar are projected onto the ground plane along the height direction to obtain a bird's-eye view of the laser radar point cloud, and the bird's-eye view is input into a trained CNN to obtain the target detection result of the laser radar, i.e. the target positions, types, detection confidences and other information marked in the bird's-eye view.
The millimeter wave radar data are projected onto the ground plane along the height direction to obtain a bird's-eye view of the millimeter wave radar data, and the bird's-eye view is input into a trained CNN to obtain the target detection result of the millimeter wave radar, again as the target positions, types, detection confidences and other information marked in the bird's-eye view.
A perspective transformation is applied to the camera information to obtain a bird's-eye view of the camera, and the bird's-eye view is input into a trained CNN to obtain the target detection result of the camera, i.e. the target positions, types, detection confidences and other information marked in the bird's-eye view.
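The patent does not publish its Kalman filter equations. The following is a minimal sketch of how V2X and GNSS position measurements for one horizontal axis could be fused with a constant-velocity Kalman filter; the 20 ms period and all noise values are illustrative assumptions, not parameters taken from the patent.

    import numpy as np

    dt = 0.02                                  # assumed system period T = 20 ms
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [position, velocity]
    H = np.array([[1.0, 0.0]])                 # both sources observe position only
    Q = np.diag([0.01, 0.1])                   # process noise (assumed)
    R_v2x, R_gnss = np.array([[4.0]]), np.array([[1.0]])   # measurement noise (assumed)

    x, P = np.zeros((2, 1)), np.eye(2)         # initial state and covariance

    def kf_update(x, P, z, R):
        # Standard Kalman measurement update for one scalar position observation z.
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ y, (np.eye(2) - K @ H) @ P

    for z_v2x, z_gnss in [(1.0, 1.1), (1.4, 1.5), (1.9, 1.8)]:
        x, P = F @ x, F @ P @ F.T + Q                          # predict
        x, P = kf_update(x, P, np.array([[z_v2x]]), R_v2x)     # correct with V2X position
        x, P = kf_update(x, P, np.array([[z_gnss]]), R_gnss)   # correct with GNSS position

In practice the state would also include the second horizontal axis and heading, but the two-stage correction above already shows how the V2X and GNSS observations jointly refine position and speed.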
S103, comparing the data to be processed of each sensor pairwise to obtain a target detection correlation coefficient between the two sensors;
specifically, the target detection results obtained by the sensors are compared pairwise to obtain a target detection correlation coefficient between every two sensors. Taking the V2X and laser radar information as an example, suppose the area of the bounding box of a certain target on the V2X bird's-eye view is S_V and the area of the same target on the laser radar bird's-eye view is S_D. The correlation coefficient of V2X target detection to laser radar target detection is calculated by formula (1).
[Formula (1) and its symbols appear only as images in the original document. Formula (1) defines the V2X-to-laser-radar target detection correlation coefficient in terms of S_V, S_D and the area of the intersection of the target region in the bird's-eye view obtained by V2X detection with the target region in the bird's-eye view obtained by the laser radar.] Similarly, the correlation coefficients of V2X target detection to millimeter wave radar target detection and of V2X target detection to camera target detection can be obtained.
The target detection correlation coefficient from the laser radar to V2X is obtained through formula (2). [Formula (2) likewise appears only as an image in the original document and is expressed in terms of the same quantities.]
In the same way, the target detection correlation coefficients from the laser radar to the millimeter wave radar and from the laser radar to the camera can be obtained respectively.
By the same approach, the target detection correlation coefficients from the millimeter wave radar to the other three sensors and from the camera to the other three sensors can be obtained respectively.
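Because formulas (1) and (2) are available only as images, their exact expressions cannot be reproduced here. A plausible reading, treated purely as an assumption in the sketch below, is a directional overlap ratio between the two sensors' bird's-eye-view boxes for the same target:

    def detection_correlation(area_a, area_b, area_intersection):
        # Directional target detection correlation coefficient of sensor A to sensor B,
        # assumed here to be the share of A's target box covered by the common intersection.
        return area_intersection / area_a if area_a > 0.0 else 0.0

    # Example with the symbols used in the text: S_V from V2X, S_D from the laser radar
    S_V, S_D, S_VD = 4.0, 5.0, 3.5
    r_v2x_to_lidar = detection_correlation(S_V, S_D, S_VD)   # assumed form of formula (1)
    r_lidar_to_v2x = detection_correlation(S_D, S_V, S_VD)   # assumed form of formula (2)

Any overlap measure with the same ingredients (for example intersection over union) would fit the surrounding description equally well; the directional form is kept only because the text distinguishes A-to-B from B-to-A coefficients.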
S104, obtaining the confidence coefficient of each sensor based on the target detection correlation coefficient, and carrying out scene judgment based on the confidence coefficient;
secondly, the confidence scores of the sensors are obtained by summation and normalization, as follows:
[Formulas (3) and (4) and the associated symbols appear only as images in the original document. Formula (3) computes, for each sensor, the sum of its target detection correlation coefficients to the other sensors; in this way the summed coefficients of V2X, the laser radar, the millimeter wave radar and the camera are obtained. Formula (4) then normalizes these sums, yielding the confidence of V2X; the confidences of the other sensors are obtained in the same manner.]
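Following the summation-and-normalization description of formulas (3) and (4), a minimal sketch is given below; the correlation matrix values are illustrative only, with entry [i, j] standing for the correlation coefficient of sensor i's detections to sensor j's detections.

    import numpy as np

    sensors = ["v2x", "lidar", "camera", "radar"]
    r = np.array([
        [0.00, 0.80, 0.70, 0.75],
        [0.80, 0.00, 0.65, 0.70],
        [0.70, 0.65, 0.00, 0.60],
        [0.75, 0.70, 0.60, 0.00],
    ])
    sums = r.sum(axis=1)                          # formula (3): per-sensor sum of coefficients
    confidence = dict(zip(sensors, sums / sums.sum()))   # formula (4): normalization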
Finally, the scene is judged on the basis of the confidence scores. The scenes include: a clear day scene, a rainy day scene, a foggy day scene, a snowy day scene, an occlusion scene and a severe electromagnetic environment scene. Scene judgment based on the confidences proceeds as follows (a sketch of these rules follows the list):
if the confidence differences between the sensors are not greater than threshold 1, the scene is determined to be a clear day scene;
if the confidence of the camera differs from the confidences of the other sensors by more than threshold 2 and by no more than threshold 3, the scene is determined to be a rainy day scene;
if the confidence of the camera differs from the confidences of the other sensors by more than threshold 3, the scene is determined to be a foggy day scene;
if the confidence of the camera differs from the confidences of the vehicle-mounted OBU terminal and the millimeter wave radar by more than threshold 2, and the confidence of the laser radar differs from the confidences of the vehicle-mounted OBU terminal and the millimeter wave radar by more than threshold 4, the scene is determined to be a snowy day scene;
if the confidence of the vehicle-mounted OBU terminal differs from the confidences of the other sensors by more than threshold 5, and the confidence differences among the other sensors are not greater than threshold 6, the scene is determined to be an occlusion scene;
and if the confidence of the vehicle-mounted OBU terminal differs from the confidences of the other sensors by more than threshold 5, and the confidence differences among the other sensors are not greater than threshold 7, the scene is determined to be a severe electromagnetic environment scene.
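A rule-of-thumb sketch of the threshold rules above follows. The threshold values, and the directional assumptions about which sensor's confidence drops in which scene (camera confidence falling in rain and fog, camera and laser radar confidence falling in snow, V2X confidence rising relative to the others under occlusion but falling under electromagnetic interference), are interpretations of the translated rules, not values or conventions given in the patent.

    def judge_scene(conf, th):
        # conf: per-sensor confidence, keys "v2x", "lidar", "camera", "radar".
        # th: thresholds indexed 1-7 (concrete values are not disclosed in the patent).
        c = conf
        all_c = [c["v2x"], c["lidar"], c["camera"], c["radar"]]
        if max(all_c) - min(all_c) <= th[1]:
            return "clear"
        cam_gap = min(c[k] - c["camera"] for k in ("v2x", "lidar", "radar"))   # camera deficit
        if th[2] < cam_gap <= th[3]:
            return "rain"
        if cam_gap > th[3]:
            return "fog"
        if (min(c["v2x"], c["radar"]) - c["camera"] > th[2]
                and min(c["v2x"], c["radar"]) - c["lidar"] > th[4]):
            return "snow"
        others = [c["lidar"], c["camera"], c["radar"]]
        others_spread = max(others) - min(others)
        if c["v2x"] - max(others) > th[5] and others_spread <= th[6]:
            return "occlusion"
        if min(others) - c["v2x"] > th[5] and others_spread <= th[7]:
            return "severe_electromagnetic"
        return "unknown"

    thresholds = {1: 0.05, 2: 0.10, 3: 0.25, 4: 0.15, 5: 0.20, 6: 0.10, 7: 0.10}  # illustrative only
    scene = judge_scene({"v2x": 0.30, "lidar": 0.28, "camera": 0.12, "radar": 0.30}, thresholds)
    # with these illustrative numbers, scene == "rain"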
And S105, determining the type of the input sensor according to the judged scene, determining the weight of each input sensor according to a preset neural network, and fusing data acquired by the corresponding sensor according to the weight to obtain a fusion result.
In some embodiments, the determining the type of the sensor input according to the determined scene includes:
if the scene is a clear day scene, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar, a camera and a millimeter wave radar;
if the scene is rainy, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar and a millimeter wave radar;
if the scene is a foggy day scene, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar and a millimeter wave radar;
if the scene is a snow scene, the type of the input sensor comprises a vehicle-mounted OBU terminal and a millimeter wave radar;
if the scene is an occlusion scene, the type of the input sensor is mainly the vehicle-mounted OBU terminal, and other sensors are selected according to the target detection correlation coefficient from other sensors to the vehicle-mounted OBU terminal;
if the scene is a severe electromagnetic environment scene, the types of the input sensors include a laser radar, a camera and a millimeter wave radar.
It can be understood that, in the present application, an adaptive fusion algorithm is adopted to fuse the data collected by the corresponding sensors according to the weights to obtain the fusion result (see the sketch below).
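The sketch below illustrates the final step under stated assumptions: the scene-to-sensor mapping follows the list above, the weights are placeholders standing in for the output of the preset neural network, the fused quantity is a simple weighted average of one matched target's position (an assumed form of the adaptive fusion, not the patent's exact algorithm), and the occlusion scene is omitted because its sensor set is chosen dynamically from the correlation coefficients.

    import numpy as np

    SCENE_SENSORS = {
        "clear":                  ["v2x", "lidar", "camera", "radar"],
        "rain":                   ["v2x", "lidar", "radar"],
        "fog":                    ["v2x", "lidar", "radar"],
        "snow":                   ["v2x", "radar"],
        "severe_electromagnetic": ["lidar", "camera", "radar"],
    }

    def fuse_positions(scene, detections, weights):
        # Weighted fusion of one matched target's position across the sensors selected for the scene.
        # detections: sensor -> np.array([x, y]) position; weights: sensor -> neural-network weight.
        selected = [s for s in SCENE_SENSORS[scene] if s in detections]
        w = np.array([weights[s] for s in selected])
        w = w / w.sum()                                   # renormalize over the selected sensors
        pts = np.stack([detections[s] for s in selected])
        return (w[:, None] * pts).sum(axis=0)             # weighted-average position

    fused = fuse_positions(
        "rain",
        {"v2x": np.array([10.2, 3.1]), "lidar": np.array([10.0, 3.0]), "radar": np.array([10.4, 3.2])},
        {"v2x": 0.40, "lidar": 0.35, "radar": 0.25},
    )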
The working principle of the scene perception-based V2X multi-sensor fusion method is as follows: firstly, the V2X information, GNSS information, laser radar, camera and millimeter wave radar data are fed into a central computing platform for time synchronization, yielding time-synchronized V2X, laser radar, camera and millimeter wave radar data; secondly, the V2X information is processed by Kalman filtering to improve the accuracy of the data, and the laser radar, camera and millimeter wave radar data are each passed through a CNN to obtain their respective output results; then the output information of the sensors is compared pairwise to obtain a target detection correlation coefficient between every two sensors, confidence scores of the sensors are obtained from the correlation coefficients, and scene judgment is performed on the basis of the confidence scores; finally, the types of input sensors for sensor fusion are determined according to the scene, the weight of each sensor is obtained from the trained neural network, and the final fusion result is obtained.
In some embodiments, as shown in fig. 3, an embodiment of the present application provides a scene-aware-based V2X multi-sensor fusion apparatus, including:
an obtaining module 201, configured to obtain the sensor data collected by each sensor and perform time synchronization on the sensor data; the sensors include: a vehicle-mounted OBU terminal, a vehicle-mounted navigation terminal, a laser radar, a camera and a millimeter wave radar;
the processing module 202 is configured to perform preprocessing on the sensor data after time synchronization to obtain to-be-processed data;
the comparison module 203 is used for comparing the data to be processed of each sensor pairwise to obtain a target detection correlation coefficient between the two sensors;
a judging module 204, configured to obtain confidence levels of the sensors based on the target detection correlation coefficient, and perform scene judgment based on the confidence levels;
the fusion module 205 is configured to determine the type of the input sensor according to the determined scene, determine the weight of each input sensor according to a preset neural network, and fuse data acquired by the corresponding sensor according to the weight to obtain a fusion result.
The working principle of the scene-sensing-based V2X multi-sensor fusion device provided by the application is that an acquisition module 201 acquires sensor data acquired by each sensor and performs time synchronization on the sensor data; the sensor includes: the system comprises a vehicle-mounted OBU terminal, a vehicle-mounted navigation terminal, a laser radar, a camera and a millimeter wave radar; the processing module 202 preprocesses the sensor data after time synchronization to obtain data to be processed; the comparison module 203 compares the data to be processed of each sensor pairwise to obtain a target detection correlation coefficient between the two sensors; the judging module 204 obtains the confidence of each sensor based on the target detection correlation coefficient, and performs scene judgment based on the confidence; the fusion module 205 determines the type of the input sensor according to the determined scene, determines the weight of each input sensor according to a preset neural network, and fuses data acquired by the corresponding sensor according to the weight to obtain a fusion result.
It is to be understood that the embodiments of the method provided above correspond to the embodiments of the apparatus described above, and the corresponding specific contents may be referred to each other, which is not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A V2X multi-sensor fusion method based on scene perception is characterized by comprising the following steps:
acquiring the sensor data collected by each sensor, and carrying out time synchronization on the sensor data; the sensors include: a vehicle-mounted OBU terminal, a vehicle-mounted navigation terminal, a laser radar, a camera and a millimeter wave radar;
preprocessing the sensor data after time synchronization to obtain data to be processed;
comparing the data to be processed of each sensor pairwise to obtain a target detection correlation coefficient between the two sensors;
obtaining the confidence coefficient of each sensor based on the target detection correlation coefficient, and carrying out scene judgment based on the confidence coefficient;
determining the type of the input sensor according to the judged scene, determining the weight of each input sensor according to a preset neural network, and fusing data collected by the corresponding sensors according to the weight to obtain a fusion result.
2. The method of claim 1,
the vehicle-mounted OBU terminal is used for acquiring V2X information, and the V2X information comprises: vehicle ID, location information, GNSS accuracy, speed, direction, acceleration, vehicle size, vehicle type, and timestamp; the position information comprises longitude, latitude and height;
the vehicle-mounted navigation terminal is used for acquiring GNSS information, and the GNSS information comprises: longitude, latitude, altitude, and timestamp;
the lidar is configured to obtain lidar information, the lidar information including: angle, distance, reflection intensity, and timestamp;
the camera is used for acquireing camera information, camera information includes: current frame image information and a timestamp;
the millimeter wave radar is used for obtaining millimeter wave radar information, and the millimeter wave radar information comprises: distance, angle, speed, and timestamp.
3. The method of claim 2, wherein coordinate transformation is performed for each sensor so that the coordinates of all sensors are unified into an east-north-up (ENU) coordinate system whose origin is the midpoint between the two rear wheels of the ego vehicle, and wherein preprocessing the time-synchronized sensor data comprises:
performing fusion processing on the V2X information and the GNSS information by using a Kalman filter to obtain a target detection result;
respectively obtaining a bird's-eye view of the laser radar information, the camera information and the millimeter wave radar information according to the laser radar information, the camera information and the millimeter wave radar information,
adopting a pre-trained CNN neural network to perform target detection on the laser radar information, the camera information and the aerial view of the millimeter wave radar information to respectively obtain target detection results of the laser radar, the camera and the millimeter wave radar;
determining all target detection results as data to be processed;
the target detection result of the laser radar is that the laser radar marks the target position, type and detection confidence in the aerial view; the target detection result of the camera is that the camera marks the position, type and detection confidence of the target in the aerial view; and the target detection result of the millimeter wave radar is that the millimeter wave radar marks the target position, type and detection confidence coefficient in the aerial view.
4. The method according to claim 3, wherein the fusing the V2X information and the GNSS information by using the kalman filter comprises:
obtaining a bird's-eye view of V2X target detection according to V2X information acquired by a vehicle-mounted OBU terminal of a vehicle and V2X information of other vehicles acquired by vehicle-mounted OBU terminals of other vehicles; the bird's eye view includes the location of the own vehicle and other vehicles in the bird's eye view, the size and relative orientation of the vehicles.
5. The method according to claim 2, wherein the comparing the data to be processed of each sensor two by two to obtain the target detection correlation coefficient between two sensors comprises:
according to the acquired data to be processed, respectively obtaining a target detection correlation coefficient from the vehicle-mounted OBU terminal to the laser radar, from the vehicle-mounted OBU terminal to the camera, from the vehicle-mounted OBU terminal to the millimeter wave radar, from the laser radar to the vehicle-mounted OBU terminal, from the laser radar to the millimeter wave radar, from the laser radar to the camera, from the millimeter wave radar to the vehicle-mounted OBU terminal, from the millimeter wave radar to the laser radar, and from the millimeter wave radar to the camera.
6. The method of claim 5, wherein the deriving the confidence level for each sensor based on the target detection correlation coefficient comprises:
respectively calculating the sum of target detection correlation coefficients of each sensor to other sensors;
and normalizing the sum of the target detection correlation coefficients obtained by the sensors to obtain the confidence coefficient of each sensor.
7. The method of claim 6, wherein the scenes comprise: a clear day scene, a rainy day scene, a foggy day scene, a snowy day scene, an occlusion scene and a severe electromagnetic environment scene, and wherein performing scene judgment based on the confidences comprises:
if the confidence differences between the sensors are not greater than threshold 1, determining that the scene is a clear day scene;
if the confidence of the camera differs from the confidences of the other sensors by more than threshold 2 and by no more than threshold 3, determining that the scene is a rainy day scene;
if the confidence of the camera differs from the confidences of the other sensors by more than threshold 3, determining that the scene is a foggy day scene;
if the confidence of the camera differs from the confidences of the vehicle-mounted OBU terminal and the millimeter wave radar by more than threshold 2, and the confidence of the laser radar differs from the confidences of the vehicle-mounted OBU terminal and the millimeter wave radar by more than threshold 4, determining that the scene is a snowy day scene;
if the confidence of the vehicle-mounted OBU terminal differs from the confidences of the other sensors by more than threshold 5, and the confidence differences among the other sensors are not greater than threshold 6, determining that the scene is an occlusion scene;
and if the confidence of the vehicle-mounted OBU terminal differs from the confidences of the other sensors by more than threshold 5, and the confidence differences among the other sensors are not greater than threshold 7, determining that the scene is a severe electromagnetic environment scene.
8. The method of claim 7, wherein determining the type of the input sensor according to the determined scene comprises:
if the scene is a clear day scene, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar, a camera and a millimeter wave radar;
if the scene is a rainy scene, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar and a millimeter wave radar;
if the scene is a foggy day scene, the types of the input sensors comprise a vehicle-mounted OBU terminal, a laser radar and a millimeter wave radar;
if the scene is a snow scene, the type of the input sensor comprises a vehicle-mounted OBU terminal and a millimeter wave radar;
if the scene is an occlusion scene, the type of the input sensor is mainly the vehicle-mounted OBU terminal, and other sensors are selected according to the target detection correlation coefficient from other sensors to the vehicle-mounted OBU terminal;
if the scene is a severe electromagnetic environment scene, the types of the input sensors comprise a laser radar, a camera and a millimeter wave radar.
9. The method of claim 1,
and the GPS satellite terminal is adopted to perform time synchronization on the sensor data acquired by the vehicle-mounted OBU terminal, the vehicle-mounted navigation terminal, the laser radar, the camera and the millimeter wave radar.
10. A V2X multi-sensor fusion device based on scene perception is characterized by comprising:
the acquisition module is used for acquiring the sensor data collected by each sensor and carrying out time synchronization on the sensor data; the sensors include: a vehicle-mounted OBU terminal, a vehicle-mounted navigation terminal, a laser radar, a camera and a millimeter wave radar;
the processing module is used for preprocessing the sensor data after time synchronization to obtain data to be processed;
the comparison module is used for comparing the data to be processed of each sensor pairwise to obtain a target detection correlation coefficient between the two sensors;
the judging module is used for obtaining the confidence coefficient of each sensor based on the target detection correlation coefficient and carrying out scene judgment based on the confidence coefficient;
and the fusion module is used for determining the type of the input sensor according to the judged scene, determining the weight of each input sensor according to a preset neural network, and fusing data acquired by the corresponding sensor according to the weight to obtain a fusion result.
CN202211314591.0A 2022-10-26 2022-10-26 Scene perception-based V2X multi-sensor fusion method and device Active CN115379408B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211314591.0A CN115379408B (en) 2022-10-26 2022-10-26 Scene perception-based V2X multi-sensor fusion method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211314591.0A CN115379408B (en) 2022-10-26 2022-10-26 Scene perception-based V2X multi-sensor fusion method and device

Publications (2)

Publication Number Publication Date
CN115379408A true CN115379408A (en) 2022-11-22
CN115379408B CN115379408B (en) 2023-01-13

Family

ID=84074035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211314591.0A Active CN115379408B (en) 2022-10-26 2022-10-26 Scene perception-based V2X multi-sensor fusion method and device

Country Status (1)

Country Link
CN (1) CN115379408B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116346862A (en) * 2023-05-26 2023-06-27 斯润天朗(无锡)科技有限公司 Sensor sharing method and device for intelligent network-connected automobile

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108983219A (en) * 2018-08-17 2018-12-11 北京航空航天大学 A kind of image information of traffic scene and the fusion method and system of radar information
CN111142528A (en) * 2019-12-31 2020-05-12 天津职业技术师范大学(中国职业培训指导教师进修中心) Vehicle dangerous scene sensing method, device and system
US11001231B1 (en) * 2019-02-28 2021-05-11 Ambarella International Lp Using camera data to manage a vehicle parked outside in cold climates
CN113177428A (en) * 2020-01-27 2021-07-27 通用汽车环球科技运作有限责任公司 Real-time active object fusion for object tracking
CN113657270A (en) * 2021-08-17 2021-11-16 江苏熙枫智能科技有限公司 Unmanned aerial vehicle tracking method based on deep learning image processing technology
CN113734203A (en) * 2021-09-23 2021-12-03 中汽创智科技有限公司 Control method, device and system for intelligent driving and storage medium
WO2022166606A1 (en) * 2021-02-07 2022-08-11 华为技术有限公司 Target detection method and apparatus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108983219A (en) * 2018-08-17 2018-12-11 北京航空航天大学 A kind of image information of traffic scene and the fusion method and system of radar information
US11001231B1 (en) * 2019-02-28 2021-05-11 Ambarella International Lp Using camera data to manage a vehicle parked outside in cold climates
CN111142528A (en) * 2019-12-31 2020-05-12 天津职业技术师范大学(中国职业培训指导教师进修中心) Vehicle dangerous scene sensing method, device and system
CN113177428A (en) * 2020-01-27 2021-07-27 通用汽车环球科技运作有限责任公司 Real-time active object fusion for object tracking
WO2022166606A1 (en) * 2021-02-07 2022-08-11 华为技术有限公司 Target detection method and apparatus
CN113657270A (en) * 2021-08-17 2021-11-16 江苏熙枫智能科技有限公司 Unmanned aerial vehicle tracking method based on deep learning image processing technology
CN113734203A (en) * 2021-09-23 2021-12-03 中汽创智科技有限公司 Control method, device and system for intelligent driving and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Wuling, "Intelligent Vehicle Environment Perception Technology and Platform Construction", Microcontroller and Embedded System Applications

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116346862A (en) * 2023-05-26 2023-06-27 斯润天朗(无锡)科技有限公司 Sensor sharing method and device for intelligent network-connected automobile
CN116346862B (en) * 2023-05-26 2023-10-24 斯润天朗(无锡)科技有限公司 Sensor sharing method and device for intelligent network-connected automobile

Also Published As

Publication number Publication date
CN115379408B (en) 2023-01-13

Similar Documents

Publication Publication Date Title
CN110588653B (en) Control system, control method and controller for autonomous vehicle
CN109556615B (en) Driving map generation method based on multi-sensor fusion cognition of automatic driving
CN108801276B (en) High-precision map generation method and device
CN111142091B (en) Automatic driving system laser radar online calibration method fusing vehicle-mounted information
CN111860493A (en) Target detection method and device based on point cloud data
CN106257242A (en) For regulating unit and the method for road boundary
CN110745140A (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
CN112650220A (en) Automatic vehicle driving method, vehicle-mounted controller and system
CN113085852A (en) Behavior early warning method and device for automatic driving vehicle and cloud equipment
CN110208783B (en) Intelligent vehicle positioning method based on environment contour
CN112348848A (en) Information generation method and system for traffic participants
CN109900490B (en) Vehicle motion state detection method and system based on autonomous and cooperative sensors
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN115379408B (en) Scene perception-based V2X multi-sensor fusion method and device
CN114063090A (en) Mobile equipment positioning method and device and mobile equipment
CN113191030A (en) Automatic driving test scene construction method and device
CN113252022A (en) Map data processing method and device
CN113252051A (en) Map construction method and device
CN115205803A (en) Automatic driving environment sensing method, medium and vehicle
CN115923839A (en) Vehicle path planning method
US20180114075A1 (en) Image processing apparatus
CN113962301A (en) Multi-source input signal fused pavement quality detection method and system
CN108973995B (en) Environment perception data processing method and device for driving assistance and vehicle
CN117416349A (en) Automatic driving risk pre-judging system and method based on improved YOLOV7-Tiny and SS-LSTM in V2X environment
EP4141482A1 (en) Systems and methods for validating camera calibration in real-time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant