CN114415173A - Fog-penetrating target identification method for high-robustness laser-vision fusion - Google Patents

Fog-penetrating target identification method for high-robustness laser-vision fusion

Info

Publication number
CN114415173A
CN114415173A (application CN202210047699.1A)
Authority
CN
China
Prior art keywords
target
image
visual
fog
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210047699.1A
Other languages
Chinese (zh)
Inventor
毕欣 (Bi Xin)
许志秋 (Xu Zhiqiu)
熊璐 (Xiong Lu)
张博 (Zhang Bo)
杨士超 (Yang Shichao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202210047699.1A
Publication of CN114415173A
Legal status: Pending

Classifications

    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/867 Combinations of radar systems with non-radar systems; combination of radar systems with cameras
    • G01S13/931 Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G06F18/251 Pattern recognition; fusion techniques of input or preprocessed data
    • G06N3/045 Neural networks; combinations of networks
    • G06T5/73 Image enhancement or restoration; deblurring; sharpening


Abstract

The invention discloses a fog-penetrating target recognition method based on highly robust radar-vision fusion, belonging to the technical field of vehicle recognition. Detection data acquired by a millimeter-wave radar and visual information acquired by a camera are fused within ROS, exploiting the fact that the millimeter-wave radar is largely unaffected by haze to supply complementary information to the image defogging algorithm. The dark channel prior defogging algorithm is optimized with driving-scene feature information: to address low defogging efficiency, a defogging threshold is determined using the average transmittance of the image as the evaluation index, and the atmospheric light value is refined with a three-frame difference method. To address missed visual detections in foggy weather, the region of interest in the foggy image is located using the lateral distance information from the millimeter-wave radar and the transmittance is recalculated from the radar's target distance information, effectively avoiding missed visual detections and improving the accuracy and robustness of vehicle recognition.

Description

Fog-penetrating target identification method for high-robustness laser-vision fusion
Technical Field
The invention relates to the technical field of vehicle identification, in particular to a highly robust radar-vision fusion fog-penetrating target identification method.
Background
Existing research on target detection and recognition that fuses millimeter-wave radar and vision has the following shortcomings. First, when existing image-processing algorithms handle foggy images, errors in the transmittance and the atmospheric light value degrade the overall defogging result. Second, the camera's visual information is not matched against millimeter-wave radar detections to verify whether a target has been missed by the visual pipeline. Finally, the visual information is not re-processed and re-identified on the basis of such missed visual detections, so vehicle detection and recognition accuracy in adverse weather remains low.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a highly robust radar-vision fusion fog-penetrating target identification method that solves the following technical problem: the low accuracy of target detection and recognition in adverse weather when millimeter-wave radar and vision are fused in existing schemes.
The purpose of the invention can be realized by the following technical scheme:
a fog-penetrating target identification method for high robustness radar-vision fusion includes the following specific steps:
respectively acquiring a foggy image set and detection data of a target vehicle through a camera and a millimeter-wave radar, wherein the detection data comprise a plurality of CAN messages; performing defogging preprocessing on the foggy image set to obtain a defogged image set; and acquiring the timestamp of each defogged image in the defogged image set and setting it as the image timestamp;
carrying out visual target detection on the defogged image set to obtain a target detection set containing a plurality of visual targets;
parsing the CAN messages in the detection data, screening them by message ID and extracting target data, wherein the target data comprise the target ID, the target abscissa and ordinate, the target lateral and longitudinal relative velocities, the target motion state and the target radar cross-section, and storing the target data in a constructed target list indexed by target ID;
acquiring a timestamp when a message is received, setting the timestamp as a message timestamp, and performing matching screening on a visual target and target data according to the image timestamp and the message timestamp to obtain a matching set;
acquiring a detection target coordinate set of the millimeter wave radar in the matching set and a visual target coordinate set of the camera according to a world coordinate system to perform target matching, and judging whether unmatched targets exist in the detection target coordinate set of the millimeter wave radar and the visual target coordinate set of the camera or not;
if yes, setting the target as a visual missed detection target and generating a missed detection instruction; if not, generating a first identification set;
acquiring a region of interest of the defogged image according to the missed-detection instruction and the visual missed-detection target, and performing local defogging on the region of interest to obtain a defogged region;
performing visual target detection on the defogged region to obtain third target information of the defogged image, and performing secondary target matching between the third target information and the detection target coordinate set to generate a second identification set; the first identification set and the second identification set constitute the identification result.
Further, the specific steps of performing defogging preprocessing on the foggy image set include:
simulating the foggy image set in the simulation environment to obtain the average transmittance of the foggy images in the foggy image set;
matching the average transmittance against a preset fog-penetration threshold, and setting each foggy image whose average transmittance is below the fog-penetration threshold as a selected image;
analyzing the atmospheric light values of a plurality of selected images based on a three-frame difference method to judge whether re-estimation is needed, the specific steps comprising:
obtaining the dark channel intensity value Ω at a selected image pixel and the brightness value L at that pixel, and analyzing several consecutive frames of the selected images through the following discriminant expressions:
D_i(x_i, y_i) = 1 if |Ω_i(x_i, y_i) − Ω_(i+1)(x_i, y_i)| > T or |L_i(x_i, y_i) − L_(i+1)(x_i, y_i)| > T, and 0 otherwise;
D_(i+1)(x_i, y_i) = 1 if |Ω_(i+1)(x_i, y_i) − Ω_(i+2)(x_i, y_i)| > T or |L_(i+1)(x_i, y_i) − L_(i+2)(x_i, y_i)| > T, and 0 otherwise;
wherein (x_i, y_i) is the pixel coordinate of the atmospheric light value in the i-th frame image, T is the difference binarization threshold, and D is the binarized difference of the atmospheric light values of two adjacent frames;
taking and analyzing the intersection of D_i and D_(i+1); when the intersection result is 1, it is judged that the atmospheric light value has a large error, and the atmospheric light value is re-estimated;
when the intersection result is not 1, judging that the atmospheric light value error is small and setting the atmospheric light value as a corrected atmospheric light value;
obtaining a corrected transmittance of the selected image based on the atmospheric scattering coefficient and the scene depth:
the calculation formula of the corrected transmittance is as follows:
t(x) = e^(−β·d(x))
wherein t(x) is the corrected transmittance, β is the atmospheric scattering coefficient, and d(x) is the scene depth, obtained through millimeter-wave radar detection;
applying the corrected atmospheric light value and the corrected transmittance to the foggy-image scattering model to obtain a defogged image, wherein the foggy-image scattering model is I(x) = J(x)·t(x) + A·[1 − t(x)], I(x) is the foggy image, J(x) is the defogged image, and A is the corrected atmospheric light value;
and combining the defogged images according to time sequence to obtain a defogged image set.
Further, the visual target detection is carried out with the convolutional neural network-based YOLOv3 target detection method.
Further, the specific steps of matching and screening the visual targets and the target data according to the image timestamp and the message timestamp comprise:
acquiring the image timestamp and the message timestamp, performing matching and screening based on an approximate-time (closest-timestamp) alignment strategy, and arranging and combining, in time order, the visual targets and target data whose image and message timestamps fall within the matching timestamp range, to obtain a matching set.
Further, coordinate systems are established for the millimeter-wave radar and the camera respectively to obtain a millimeter-wave radar coordinate system and a camera coordinate system, and the two are connected through a world coordinate system; the world coordinate system X_wY_wZ_w−O_w is established with the following specific steps:
selecting the projection of the center of the monitoring vehicle's rear axle onto the ground (height 0) as the origin, taking the driving direction of the monitoring vehicle as the positive z-axis, the upward direction perpendicular to the ground as the positive y-axis, and the leftward direction perpendicular to the central axis of the vehicle body as the positive x-axis;
the target r measured by the millimeter-wave radar lies in the X_rO_rZ_r plane of the radar coordinate system, which is parallel to the X_wO_wZ_w plane of the world coordinate system; the radar's X_r and Z_r coordinates are therefore the y and x coordinates obtained by parsing the CAN message;
for the coordinates (x_r, y_r) in the target data measured by the millimeter-wave radar, the conversion to the world coordinate system is realized by the first conversion formula, shown below:
[X_w, Y_w, Z_w]^T = [y_r, 0, x_r]^T + T_r
wherein the offset of the millimeter-wave radar in the world coordinate system is T_r = (R_x, R_y, R_z)^T;
An image physical coordinate system and an image pixel coordinate system are obtained from the camera coordinate system. The coordinates of a target point P in the camera coordinate system are (X_c, Y_c, Z_c), in meters; the corresponding coordinates of P in the image physical coordinate system are (X_c', Y_c'), in meters; and the corresponding coordinates of P in the image pixel coordinate system are (u, v), in pixels. The conversion between a target point in the pixel coordinate system and the world coordinate system is realized by the second conversion formula, shown below:
Z_c·[u, v, 1]^T = [[1/dx, 0, u_0], [0, 1/dy, v_0], [0, 0, 1]]·[[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]]·[[R, T_c], [0^T, 1]]·[X_w, Y_w, Z_w, 1]^T
where u and v are the abscissa and ordinate of the target point in the image pixel coordinate system, Z_c is the Z-axis coordinate of the target point in the camera coordinate system, f is the focal length of the camera, dx and dy are the horizontal and vertical widths of a single pixel, u_0 and v_0 are the abscissa and ordinate of the origin of the image physical coordinate system in the image pixel coordinate system, R is the rotation matrix, and T_c is the translation vector;
the converted target points in the millimeter wave radar coordinate system and the camera coordinate system are connected, and the relationship between the target data measured by the millimeter wave radar and the corresponding points of the image is as follows:
Z_c·[u, v, 1]^T = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]]·[R | T_c]·[y_r + R_x, R_y, x_r + R_z, 1]^T
Setting the number of targets acquired by the millimeter-wave radar as m and the number of targets acquired by the camera as n, the following association matrix is generated:
D(m×n) = [[d_11, d_12, …, d_1n], [d_21, d_22, …, d_2n], …, [d_m1, d_m2, …, d_mn]]
wherein d_ij denotes the distance between the i-th millimeter-wave radar target and the j-th camera target in the X_wO_wZ_w plane, i.e.:
d_ij = √[(X_cw − X_rw)² + (Z_cw − Z_rw)²]
wherein X_cw and Z_cw are the camera target coordinates and X_rw and Z_rw are the millimeter-wave radar target coordinates. A bipartite graph is established based on the KM algorithm: its two point sets are the targets obtained by the millimeter-wave radar and by the camera, and the weight of the edge between two points is the distance d_ij. Matching the millimeter-wave radar targets with the camera targets is thus converted into finding the minimum-weight matching of the weighted bipartite graph, which yields a matched distance list;
and matching each element in the distance list against a preset distance threshold; if an element is greater than the distance threshold, it is judged that the millimeter-wave radar target failed to match a camera target, and that target is set as a visual missed-detection target.
Further, the region of interest of the defogged image is acquired according to the visual missed-detection target;
the ordinate of the visual missed-detection target is acquired through the millimeter-wave radar, and the coordinate set of the recognition frame is determined from this ordinate; the recognition-frame coordinate set is converted to obtain a matching coordinate set in the camera coordinate system, and the region of interest of the defogged image is acquired from the matching coordinate set;
and the scene depth of the visual missed-detection target is acquired through the millimeter-wave radar, the transmittance corresponding to the region of interest is obtained and set as the processing transmittance, and local defogging is performed on the region of interest through the foggy-image scattering model using the corrected atmospheric light value and the processing transmittance, obtaining a defogged region.
Further, the specific step of determining the coordinate set of the recognition frame according to the ordinate comprises: moving left and right from the ordinate by a preset distance to obtain a plurality of shifted abscissas, multiplying the shifted abscissas by the correction coefficient to obtain a plurality of processed abscissas, and determining the coordinate set of the recognition frame from the processed abscissas and the ordinate.
Compared with the prior art, the invention has the beneficial effects that:
In the invention, the detection data acquired by the millimeter-wave radar and the visual information acquired by the camera are fused within ROS, exploiting the fact that the millimeter-wave radar is largely unaffected by haze to supply complementary information to the image defogging algorithm. The dark channel prior defogging algorithm is optimized with driving-scene feature information: to address low defogging efficiency, a defogging threshold is determined using the average transmittance of the image as the evaluation index, and the atmospheric light value is optimized with a three-frame difference method. To address missed visual detections in foggy weather, the region of interest in the foggy image is located using the lateral distance information from the millimeter-wave radar, and the transmittance is recalculated from the target distance information acquired by the radar, effectively avoiding missed visual detections and improving the stability and reliability of the multi-sensor system. Compared with the camera, the millimeter-wave radar has a longer detection range and measures target bearing and velocity more accurately; based on this complementarity, the sensors' information is fused to achieve data complementation and redundancy between sensors and thereby obtain more complete target information.
Drawings
FIG. 1 is a flow chart of a fog-penetrating target identification method of high robustness laser-vision fusion according to the invention.
FIG. 2 is a schematic block diagram of a fog-penetrating target identification method of high robustness laser-vision fusion according to the invention.
Detailed Description
Referring to fig. 1-2, the invention is a highly robust radar-vision fusion fog-penetrating target identification method, which comprises the following specific steps:
respectively acquiring a foggy image set and detection data of a target vehicle through a camera and a millimeter wave radar, wherein the detection data comprises a plurality of CAN messages;
carrying out defogging preprocessing on the foggy image set to obtain a defogged image set, and acquiring the timestamp of each defogged image in the defogged image set and setting it as the image timestamp; the specific steps are as follows:
simulating the foggy image set through the simulation environment to obtain the average transmittance of the foggy images in the foggy image set;
matching the average transmittance against a preset fog-penetration threshold, and setting each foggy image whose average transmittance is below the fog-penetration threshold as a selected image;
In this embodiment, clear weather is the most common driving condition, so a threshold is set for the defogging algorithm: an image is defogged only when its haze level exceeds the threshold, which avoids wasting computing resources. According to the atmospheric scattering model, the transmittance of an image characterizes how foggy it is. Haze levels from 50% to 100% (in 10% steps) were simulated in the SVL Simulator, the average transmittance of each original image was output, and the images before and after defogging were sent to YOLOv3 for target detection to compare whether the detection results were consistent. When the average transmittance of the image is near 0.4, the detection results are essentially the same with or without defogging; when the average transmittance is below 0.4, defogging the image increases the detection accuracy; when the average transmittance is above 0.4, fog-penetration processing makes the colour distortion of the image more obvious, and the detection accuracy can even drop. The image fog-penetration threshold is therefore set to 0.4, i.e. an image is defogged only when its average transmittance is below 0.4, which reduces defogging time, speeds up the overall algorithm and avoids colour distortion in normal weather;
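The gating logic above can be sketched as follows. The dark-channel patch size of 15, the omega weight of 0.95 and taking the atmospheric light from the brightest 0.1% of dark-channel pixels are common dark-channel-prior choices assumed here for illustration; only the 0.4 threshold comes from this embodiment.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over the colour channels, then a minimum filter over a patch.
    dc = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(dc, kernel)

def estimate_atmospheric_light(img, dc):
    # Mean colour of the brightest 0.1% of dark-channel pixels (a common DCP choice).
    flat = dc.reshape(-1)
    idx = flat.argsort()[-max(1, flat.size // 1000):]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def average_transmittance(img_bgr, omega=0.95, patch=15):
    img = img_bgr.astype(np.float64) / 255.0
    dc = dark_channel(img, patch)
    A = estimate_atmospheric_light(img, dc)
    t = 1.0 - omega * dark_channel(img / A, patch)   # dark channel prior estimate of t(x)
    return float(t.mean()), A, t

FOG_THRESHOLD = 0.4   # defog only when the mean transmittance falls below 0.4

def needs_defogging(img_bgr):
    t_mean, _, _ = average_transmittance(img_bgr)
    return t_mean < FOG_THRESHOLD
```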
It should be noted that, when estimating the atmospheric light value, if the intensity of the dark channel map at the pixel position of the previous atmospheric light value, or the brightness of the original image at that pixel, changes, the atmospheric light must be estimated again. The atmospheric light values of several selected images are analyzed with a three-frame difference method to judge whether re-estimation is needed; the specific steps comprise:
obtaining the dark channel intensity value Ω at a selected image pixel and the brightness value L at that pixel, and analyzing several consecutive frames of the selected images through the following discriminant expressions:
D_i(x_i, y_i) = 1 if |Ω_i(x_i, y_i) − Ω_(i+1)(x_i, y_i)| > T or |L_i(x_i, y_i) − L_(i+1)(x_i, y_i)| > T, and 0 otherwise;
D_(i+1)(x_i, y_i) = 1 if |Ω_(i+1)(x_i, y_i) − Ω_(i+2)(x_i, y_i)| > T or |L_(i+1)(x_i, y_i) − L_(i+2)(x_i, y_i)| > T, and 0 otherwise;
wherein (x_i, y_i) is the pixel coordinate of the atmospheric light value in the i-th frame image, T is the difference binarization threshold, and D is the binarized difference of the atmospheric light values of two adjacent frames;
taking and analyzing the intersection of D_i and D_(i+1); when the intersection result is 1, it is judged that the atmospheric light value has a large error, and the atmospheric light value is re-estimated;
when the intersection result is not 1, judging that the atmospheric light value error is small and setting the atmospheric light value as a corrected atmospheric light value;
In this embodiment, as the principle of the dark channel prior defogging algorithm shows, the defogging process consists mainly of estimating the atmospheric light value and the transmittance, and the algorithm would normally recalculate the atmospheric light value for every frame. Under actual driving conditions, however, changes in the surroundings and the weather are continuous and sudden jumps in the atmospheric light value are rare, so during defogging the atmospheric light value only needs to be re-estimated when the brightness at the pixel of the previous atmospheric light value changes noticeably between adjacent images;
Because most of the image content collected while driving consists of fast-moving objects, an improved inter-frame difference method, the three-frame difference method, can be used for this judgment: three consecutive frames are compared simultaneously, which widens the comparison range.
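A minimal sketch of this three-frame difference check is given below. It assumes the dark-channel intensity and the brightness at the atmospheric-light pixel are both binarized against the same threshold T, the threshold value 0.1 is illustrative, and the frame dictionaries and function names are placeholders rather than anything prescribed by the patent.

```python
import numpy as np

def binary_diff(frame_a, frame_b, pixel, threshold):
    # 1 if the dark-channel intensity or the brightness at `pixel` changes by more
    # than `threshold` between two frames, else 0 (per the discriminant above).
    x, y = pixel
    d_omega = abs(frame_a["dark"][y, x] - frame_b["dark"][y, x])
    d_lum = abs(frame_a["lum"][y, x] - frame_b["lum"][y, x])
    return 1 if (d_omega > threshold or d_lum > threshold) else 0

def must_reestimate_A(frames, pixel, threshold=0.1):
    # Three-frame difference: re-estimate the atmospheric light only when both
    # adjacent differences D_i and D_(i+1) are 1, i.e. their intersection is 1.
    d_i = binary_diff(frames[0], frames[1], pixel, threshold)
    d_i1 = binary_diff(frames[1], frames[2], pixel, threshold)
    return (d_i & d_i1) == 1
```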
Obtaining a corrected transmittance of the selected image based on the atmospheric scattering coefficient and the scene depth:
the calculation formula of the corrected transmittance is as follows:
t(x) = e^(−β·d(x))
wherein t(x) is the corrected transmittance, β is the atmospheric scattering coefficient, and d(x) is the scene depth, obtained from the millimeter-wave radar measurement, i.e. the distance between the millimeter-wave radar and the target vehicle. With the millimeter-wave radar, the true depth of the region where part of the targets lie can be obtained, giving a more accurate transmittance value and thus improving the image defogging effect;
applying the corrected atmospheric light value and the corrected transmittance to the foggy-image scattering model to obtain a defogged image, wherein the foggy-image scattering model is I(x) = J(x)·t(x) + A·[1 − t(x)], I(x) is the foggy image, J(x) is the defogged image, and A is the corrected atmospheric light value;
the defogged images are combined in time order to obtain the defogged image set;
In this embodiment, improvements to the dark channel prior defogging algorithm are provided for both defogging efficiency and defogging quality: the efficiency improvement is applied in the preprocessing stage, and the quality improvement is applied when a visual detection is missed.
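Both the global and the later local defogging invert the scattering model I(x) = J(x)·t(x) + A·[1 − t(x)]. The sketch below shows that inversion, with the transmittance taken either from the dark-channel estimate or from the radar range via t(x) = e^(−β·d(x)); the scattering coefficient β = 0.05 and the lower bound t0 = 0.1 on the transmittance are illustrative safeguards, not values fixed by the patent.

```python
import numpy as np

def transmittance_from_depth(depth_m, beta=0.05):
    # t(x) = exp(-beta * d(x)); beta is an illustrative scattering coefficient.
    return np.exp(-beta * depth_m)

def defog(I, t, A, t0=0.1):
    # Invert the scattering model I = J*t + A*(1 - t)  =>  J = (I - A) / t + A.
    # I is an 8-bit image, t a scalar or per-pixel map, A a colour vector in [0, 1].
    I = I.astype(np.float64) / 255.0
    t = np.broadcast_to(np.asarray(t, dtype=np.float64), I.shape[:2])
    t = np.clip(t, t0, 1.0)                      # avoid division by a vanishing t
    J = (I - A) / t[..., None] + A               # broadcast t over the colour channels
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```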
Visual target detection is carried out on the defogged image set to obtain a target detection set. Specifically, the convolutional neural network-based YOLOv3 target detection method is used to obtain a target detection set comprising a plurality of visual targets, each visual target being an image region of a target vehicle;
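For reference, one way to run YOLOv3 on a defogged frame is through OpenCV's DNN module, as sketched below. The weight and configuration file names, the 416x416 input size and the 0.5 confidence threshold are illustrative assumptions; any YOLOv3 implementation trained on vehicle classes would serve.

```python
import cv2
import numpy as np

# Illustrative file names; real paths depend on the trained model being used.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect_vehicles(image, conf_thresh=0.5):
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    boxes = []
    for out in outputs:
        for det in out:                       # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            cls = int(np.argmax(scores))
            conf = float(scores[cls]) * float(det[4])
            if conf > conf_thresh:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                # Vehicle classes (e.g. car/bus/truck) can be filtered on `cls` here.
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh), conf, cls))
    return boxes
```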
parsing the CAN messages in the detection data, screening them by message ID and extracting target data, wherein the target data comprise the target ID, the target abscissa and ordinate, the target lateral and longitudinal relative velocities, the target motion state and the target radar cross-section, and storing the target data in a constructed target list indexed by target ID;
In this embodiment, the millimeter-wave radar may be an ARS408 radar. The ARS408 has a standard CAN interface, its communication network follows the ISO 11898-2 standard, and the CAN bus signals are defined by the corresponding DBC file. ROS provides the CAN interface ros_canopen, which implements communication between the CAN bus and ROS; with this interface, the ROS communication mechanism can be used and a user-defined node can parse the CAN messages in real time;
The CAN messages obtained in ROS are parsed as follows. First the radar state is judged by reading the CAN message with msg.ID = 0x201, in which msg.data indicates the type of the radar's output data starting from the 42nd byte: a value of 0x0 means no target is output, while 0x1 means Objects are output. The node then waits for the message with msg.ID = 0x60A, which is published only once per measurement period; when it is received, the number of targets detected in that period is recorded with the current time as the timestamp, and a target list Object_lists is constructed. After the target list is generated, the node waits for messages with msg.ID = 0x60B and records the target data, which comprise the target ID, the target abscissa and ordinate, the lateral and longitudinal relative velocities, the motion state and the radar cross-section (RCS), and stores them in the constructed target list indexed by target ID;
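A sketch of such a user-defined parsing node follows. The topic name received_messages is the default of the socketcan_bridge node shipped with ros_canopen; the byte layout of the 0x60B payload is defined by the radar's DBC file, so decode_object only illustrates the idea and its assumption that the object ID sits in the first payload byte must be checked against that file.

```python
import rospy
from can_msgs.msg import Frame

object_list = {}          # target list keyed by target ID
list_stamp = None

def decode_object(data):
    # Placeholder decoder: the real field offsets and scales come from the ARS408
    # DBC file. Here only the assumed object-ID byte and the raw payload are kept.
    return {"id": data[0], "raw": bytes(data)}

def on_frame(frame):
    global object_list, list_stamp
    if frame.id == 0x201:
        pass                                  # radar status frame: check output type here
    elif frame.id == 0x60A:
        list_stamp = rospy.Time.now()         # object list header, once per cycle:
        object_list = {}                      # start a new list with the message timestamp
    elif frame.id == 0x60B:
        obj = decode_object(frame.data)       # object general information
        object_list[obj["id"]] = obj          # store indexed by target ID

rospy.init_node("ars408_parser")
rospy.Subscriber("received_messages", Frame, on_frame)
rospy.spin()
```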
acquiring a timestamp when a message is received, setting the timestamp as a message timestamp, and performing matching screening on a visual target and target data according to the image timestamp and the message timestamp to obtain a matching set; the method comprises the following specific steps:
acquiring the image timestamp and the message timestamp, performing matching and screening based on an approximate-time (closest-timestamp) alignment strategy, and arranging and combining, in time order, the visual targets and target data whose image and message timestamps fall within the matching timestamp range, to obtain a matching set;
In this embodiment, a function package integrated in ROS is used for the approximate-time alignment strategy: timestamp alignment is realized by calling its existing packaged functions, whose principle is to search for the nearest matching timestamps through an internal adaptive algorithm;
The selected binocular camera samples at 60 Hz; the ARS408 millimeter-wave radar refreshes every 66 ms, i.e. at about 15 Hz, slightly slower than the camera. When parsing the target information from the radar's CAN messages, the node first waits for the message with ID 0x60A, the target-list header, which is published only once per measurement period; when it is received, a timestamp is recorded and a target list is created. Likewise, the camera driver node drives the camera and records a timestamp. The message_filters package is the ROS message filter: it subscribes to two or more topics at the same time and delivers messages with matching timestamps together. It offers two synchronization strategies, the Exact Time Policy (timestamps aligned exactly) and the Approximate Time Policy (timestamps close to each other). For time matching between the millimeter-wave radar and the binocular camera, the Approximate Time Policy is used, so that every frame of information collected by the radar and the camera can be used efficiently.
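The synchronization described above can be sketched with message_filters as follows. The topic names, the queue size of 10 and the slop of 0.05 s are illustrative assumptions, and PointCloud2 merely stands in for a custom radar object-list message carrying a header.

```python
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2  # PointCloud2 stands in for a radar list message

def fused_callback(image_msg, radar_msg):
    # Both messages fall within the synchronizer's slop window and can be
    # treated as belonging to the same measurement instant.
    pass

rospy.init_node("radar_camera_sync")
image_sub = message_filters.Subscriber("/camera/image_raw", Image)
radar_sub = message_filters.Subscriber("/radar/object_list", PointCloud2)

# Approximate Time Policy: pair the closest timestamps. Queue size 10 and a
# slop of 0.05 s (camera 60 Hz, radar about 15 Hz) are illustrative choices.
sync = message_filters.ApproximateTimeSynchronizer([image_sub, radar_sub], 10, 0.05)
sync.registerCallback(fused_callback)
rospy.spin()
```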
Acquiring a detection target coordinate set of the millimeter wave radar in the matching set and a visual target coordinate set of the camera according to a world coordinate system to perform target matching, and judging whether unmatched targets exist in the detection target coordinate set of the millimeter wave radar and the visual target coordinate set of the camera or not;
if yes, setting the target as a visual missed detection target and generating a missed detection instruction; if not, generating a first identification set; the method comprises the following specific steps:
respectively establishing coordinate systems for the millimeter-wave radar and the camera to obtain a millimeter-wave radar coordinate system and a camera coordinate system, and connecting them through a world coordinate system; the world coordinate system X_wY_wZ_w−O_w is established with the following specific steps:
selecting the projection of the center of the monitoring vehicle's rear axle onto the ground (height 0) as the origin, taking the driving direction of the monitoring vehicle as the positive z-axis, the upward direction perpendicular to the ground as the positive y-axis, and the leftward direction perpendicular to the central axis of the vehicle body as the positive x-axis. The millimeter-wave radar coordinate system and the camera coordinate system are established in the same way as the world coordinate system, the difference being that their origins are located at the millimeter-wave radar and the camera respectively. Vehicles ahead of the monitoring vehicle and to its left and right are monitored as target vehicles;
the target r measured by the millimeter-wave radar lies in the X_rO_rZ_r plane of the radar coordinate system, which is parallel to the X_wO_wZ_w plane of the world coordinate system; the radar's X_r and Z_r coordinates are therefore the y and x coordinates obtained by parsing the CAN message;
for the coordinates (x_r, y_r) in the target data measured by the millimeter-wave radar, the conversion to the world coordinate system is realized by the first conversion formula, shown below:
[X_w, Y_w, Z_w]^T = [y_r, 0, x_r]^T + T_r
wherein the offset of the millimeter-wave radar in the world coordinate system is T_r = (R_x, R_y, R_z)^T;
An image physical coordinate system and an image pixel coordinate system are obtained from the camera coordinate system. The coordinates of a target point P in the camera coordinate system are (X_c, Y_c, Z_c), in meters; the corresponding coordinates of P in the image physical coordinate system are (X_c', Y_c'), in meters; and the corresponding coordinates of P in the image pixel coordinate system are (u, v), in pixels. The conversion between a target point in the pixel coordinate system and the world coordinate system is realized by the second conversion formula, shown below:
Z_c·[u, v, 1]^T = [[1/dx, 0, u_0], [0, 1/dy, v_0], [0, 0, 1]]·[[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]]·[[R, T_c], [0^T, 1]]·[X_w, Y_w, Z_w, 1]^T
where u and v are the abscissa and ordinate of the target point in the image pixel coordinate system, Z_c is the Z-axis coordinate of the target point in the camera coordinate system, f is the focal length of the camera, dx and dy are the horizontal and vertical widths of a single pixel, u_0 and v_0 are the abscissa and ordinate of the origin of the image physical coordinate system in the image pixel coordinate system, R is the rotation matrix, and T_c is the translation vector;
the converted target points in the millimeter wave radar coordinate system and the camera coordinate system are connected, and the relationship between the target data measured by the millimeter wave radar and the corresponding points of the image is as follows:
Z_c·[u, v, 1]^T = [[f/dx, 0, u_0], [0, f/dy, v_0], [0, 0, 1]]·[R | T_c]·[y_r + R_x, R_y, x_r + R_z, 1]^T
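The two conversion formulas can be chained as in the following sketch. All calibration values (the radar offset T_r, rotation R, translation T_c, focal length, pixel pitch and principal point) are illustrative placeholders obtained in practice from extrinsic and intrinsic calibration, and the mapping of the radar's CAN coordinates onto the Z_w and X_w axes follows the reading given above.

```python
import numpy as np

# Illustrative calibration values; real ones come from sensor calibration.
T_r = np.array([0.0, 0.5, 2.0])          # radar offset (R_x, R_y, R_z) in the world frame
R = np.eye(3)                            # camera rotation matrix
T_c = np.array([0.0, -1.2, 0.0])         # camera translation vector
f, dx, dy = 0.006, 4.2e-6, 4.2e-6        # focal length and pixel pitch (metres)
u0, v0 = 640.0, 360.0                    # principal point (pixels)

def radar_to_world(x_r, y_r):
    # First conversion: the radar target lies in the X_w O_w Z_w plane, with the
    # CAN y-coordinate along X_w and the CAN x-coordinate along Z_w.
    return np.array([y_r, 0.0, x_r]) + T_r

def world_to_pixel(P_w):
    # Second conversion: pinhole model from world coordinates to pixel (u, v).
    P_c = R @ P_w + T_c                                    # world -> camera coordinates
    Xc_, Yc_ = f * P_c[0] / P_c[2], f * P_c[1] / P_c[2]    # image physical coordinates
    return Xc_ / dx + u0, Yc_ / dy + v0                    # image pixel coordinates

u, v = world_to_pixel(radar_to_world(25.0, -1.5))          # project one radar target
```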
Setting the number of targets acquired by the millimeter-wave radar as m and the number of targets acquired by the camera as n, the following association matrix is generated:
D(m×n) = [[d_11, d_12, …, d_1n], [d_21, d_22, …, d_2n], …, [d_m1, d_m2, …, d_mn]]
wherein d_ij denotes the distance between the i-th millimeter-wave radar target and the j-th camera target in the X_wO_wZ_w plane, i.e.:
d_ij = √[(X_cw − X_rw)² + (Z_cw − Z_rw)²]
wherein X_cw and Z_cw are the camera target coordinates and X_rw and Z_rw are the millimeter-wave radar target coordinates. A bipartite graph is established based on the KM algorithm: its two point sets are the targets obtained by the millimeter-wave radar and by the camera, and the weight of the edge between two points is the distance d_ij. Matching the millimeter-wave radar targets with the camera targets is thus converted into finding the minimum-weight matching of the weighted bipartite graph, yielding a matched distance list that comprises the elements of the minimum-weight matching;
matching each element in the distance list against the preset distance threshold; if an element is greater than the distance threshold, it is judged that the millimeter-wave radar target failed to match a camera target, and that target is set as a visual missed-detection target; the distance threshold is preset from statistics over training distances;
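The association-matrix matching can be sketched as below. scipy's linear_sum_assignment (the Hungarian method) is used here as a stand-in for the KM algorithm named in the embodiment, and the 2.5 m distance threshold is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_targets(radar_xz, camera_xz, dist_threshold=2.5):
    # radar_xz: (m, 2) radar targets, camera_xz: (n, 2) camera targets, both in
    # the X_w O_w Z_w plane. Returns matched index pairs and the radar targets
    # without a visual counterpart (candidate visual missed detections).
    d = np.linalg.norm(radar_xz[:, None, :] - camera_xz[None, :, :], axis=2)  # d_ij matrix
    rows, cols = linear_sum_assignment(d)          # minimum-weight matching (Hungarian/KM)
    matched, missed = [], []
    for i, j in zip(rows, cols):
        if d[i, j] <= dist_threshold:
            matched.append((i, j))
        else:
            missed.append(i)                       # matched pair rejected by the threshold
    missed += [i for i in range(len(radar_xz)) if i not in rows]  # unassigned radar targets
    return matched, missed
```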
acquiring the region of interest of the defogged image according to the missed-detection instruction and the visual missed-detection target, and performing local defogging on the region of interest to obtain a defogged region;
It should be noted that the region of interest of the defogged image means that the target in this region is not yet expressed in the pixel coordinate system but only in the millimeter-wave radar coordinate system: the corresponding coordinates in the world coordinate system are obtained from the coordinates of the visual missed-detection target in the radar coordinate system, and these world coordinates are then converted through the second conversion formula to determine the region of interest in the image pixel coordinate system;
It is worth noting that even when an image collected by the camera on a clear day has not been defogged, the scheme in this embodiment can still determine the visual missed-detection target and the corresponding region of interest in the image, thereby achieving local defogging of the image and target determination.
The method comprises the following specific steps:
acquiring the ordinate of the visual missed-detection target through the millimeter-wave radar and determining the coordinate set of the recognition frame from this ordinate: moving left and right from the ordinate by a preset distance to obtain a plurality of shifted abscissas, multiplying the shifted abscissas by the correction coefficient to obtain a plurality of processed abscissas, and determining the coordinate set of the recognition frame from the processed abscissas and the ordinate; the preset distance may be half of the vehicle width, and the correction coefficient may be 1.1;
converting the coordinate set of the recognition frame through the second conversion formula to obtain the matching coordinate set in the image pixel coordinate system, and acquiring the region of interest of the defogged image from the matching coordinate set;
it is noted that the millimeter-wave radar provides only a single point for the visual missed-detection target, so the recognition frame corresponding to the missed target is constructed from its ordinate;
acquiring the scene depth of the visual missed-detection target through the millimeter-wave radar, obtaining the transmittance corresponding to the region of interest and setting it as the processing transmittance, and performing local defogging on the region of interest through the foggy-image scattering model using the corrected atmospheric light value and the processing transmittance to obtain a defogged region;
it should be noted that local defogging follows the same scheme as the earlier whole-image defogging and performs a second defogging pass on the unidentified region of the defogged image, improving the overall defogging effect;
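A sketch of the recognition-frame construction and local defogging, reusing the radar_to_world, world_to_pixel and defog helpers sketched earlier, is given below. The half-vehicle-width shift and the 1.1 correction coefficient follow the embodiment; the 0.9 m half width, the 120-pixel frame height and the β value are illustrative assumptions.

```python
import numpy as np

VEHICLE_HALF_WIDTH = 0.9   # metres: half of an assumed 1.8 m vehicle width
CORRECTION = 1.1           # correction coefficient from the embodiment

def recognition_frame_world(x_r, y_r):
    # Shift the radar point's lateral coordinate left and right by half a vehicle
    # width and apply the correction coefficient (one reading of the embodiment),
    # giving the two recognition-frame corner positions in world coordinates.
    left = (y_r - VEHICLE_HALF_WIDTH) * CORRECTION
    right = (y_r + VEHICLE_HALF_WIDTH) * CORRECTION
    return radar_to_world(x_r, left), radar_to_world(x_r, right)

def local_defog_roi(image, x_r, y_r, depth_m, A, beta=0.05, half_height_px=60):
    # Project the frame corners to pixels, crop the region of interest and defog
    # it again with a transmittance recomputed from the radar range.
    (u1, v1), (u2, v2) = (world_to_pixel(p) for p in recognition_frame_world(x_r, y_r))
    h, w = image.shape[:2]
    u_lo, u_hi = max(0, int(min(u1, u2))), min(w, int(max(u1, u2)))
    v_lo, v_hi = max(0, int(min(v1, v2)) - half_height_px), min(h, int(max(v1, v2)) + half_height_px)
    t = np.exp(-beta * depth_m)                       # t(x) = e^(-beta*d) from the radar depth
    image[v_lo:v_hi, u_lo:u_hi] = defog(image[v_lo:v_hi, u_lo:u_hi], t, A)
    return image, (u_lo, v_lo, u_hi, v_hi)
```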
Visual target detection is carried out on the defogged region to obtain a missed-detection target set for that region; the corresponding missed-detection target coordinate set is obtained in the world coordinate system and matched a second time against the detection target coordinate set of the millimeter-wave radar, generating a second identification set. The first identification set and the second identification set form the identification result; the secondary target matching is performed in the same way as the matching between the visual target coordinate set and the detection target coordinate set.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (10)

1. A fog-penetrating target identification method for high robustness radar-vision fusion is characterized by comprising the following steps:
acquiring a foggy image set and detection data of a target vehicle, wherein the detection data comprise a plurality of CAN messages, and performing defogging preprocessing on the foggy image set to obtain a defogged image set;
carrying out visual target detection on the defogged image set to obtain a target detection set containing visual targets;
acquiring an image timestamp of a defogged image in the defogged image set and a message timestamp of a received message, and performing matching screening on a visual target and target data according to the image timestamp and the message timestamp to obtain a matching set;
acquiring a detection target coordinate set of the millimeter wave radar in the matching set and a visual target coordinate set of the camera according to a world coordinate system to perform target matching, and judging whether unmatched targets exist in the detection target coordinate set of the millimeter wave radar and the visual target coordinate set of the camera or not;
if yes, setting the target as a visual missed detection target and generating a missed detection instruction; if not, generating a first identification set;
and acquiring a region of interest of the defogged image according to the missed-detection instruction and the visual missed-detection target, carrying out local defogging on the region of interest to obtain a defogged region, carrying out secondary visual target detection and target matching on the defogged region, and outputting a second identification set.
2. The fog-penetrating target identification method for high-robustness radar-vision fusion as recited in claim 1, wherein the CAN messages in the detection data are parsed, screened by message ID, and target data are extracted, the target data comprising a target ID, the target abscissa and ordinate, the target lateral and longitudinal relative velocities, the target motion state and the target radar cross-section, and the target data are stored in a constructed target list indexed by target ID.
3. The fog-penetrating target identification method for high-robustness radar-vision fusion as claimed in claim 2, wherein the specific steps of performing defogging preprocessing on the foggy image set comprise:
simulating the foggy image set through the simulation environment to obtain the average transmittance of the foggy images in the foggy image set;
matching the average transmissivity with a preset fog penetration threshold value, and setting a fog image corresponding to the average transmissivity lower than the fog penetration threshold value as a selected image;
and analyzing the selected image to judge whether the atmospheric light value needs to be estimated or not.
4. The fog-penetrating target identification method for high-robustness radar-vision fusion as claimed in claim 3, wherein the specific step of estimating the atmospheric light value comprises:
obtaining the atmospheric light values of a plurality of selected images, obtaining the binarized difference values of the atmospheric light values of two adjacent frames based on a three-frame difference method, and analyzing the intersection of the binarized difference values;
when the intersection result is 1, judging that the atmospheric light value error is large and re-estimating the atmospheric light value;
and when the intersection result is not 1, judging that the atmospheric light value error is small and setting the atmospheric light value as a corrected atmospheric light value.
5. The fog-penetrating target identification method for high-robustness radar-vision fusion, wherein the corrected transmittance of the selected image is obtained, the corrected atmospheric light value and the corrected transmittance are applied to the foggy-image scattering model to obtain a defogged image, and the defogged images are combined in time order to obtain a defogged image set.
6. The fog-penetrating target identification method for high-robustness radar-vision fusion as recited in claim 5, wherein the visual target detection is performed by a convolutional neural network-based YOLOv3 target detection method.
7. The fog-penetrating target identification method for high-robustness radar-vision fusion as recited in claim 6, wherein the specific steps of matching and screening the visual targets and the target data according to the image timestamp and the message timestamp comprise:
acquiring the image timestamp and the message timestamp, performing matching and screening based on an approximate-time (closest-timestamp) alignment strategy, and arranging and combining, in time order, the visual targets and target data whose image and message timestamps fall within the matching timestamp range, to obtain a matching set.
8. The fog-penetrating target identification method for high-robustness radar-vision fusion, wherein a millimeter-wave radar coordinate system, a camera coordinate system and a world coordinate system are established, and the millimeter-wave radar coordinate system is connected with the camera coordinate system through the world coordinate system;
performing coordinate conversion on a target coordinate in target data measured by a millimeter wave radar and a world coordinate system to obtain a first conversion coordinate; performing coordinate conversion on the target coordinate of the camera coordinate system and the world coordinate system to obtain a second conversion coordinate;
establishing a bipartite graph for a plurality of first conversion coordinates and second conversion coordinates based on a KM algorithm, respectively performing minimum weight matching on a plurality of points on the bipartite graph and acquiring a distance list;
and matching each element in the distance list against a preset distance threshold; if an element is greater than the distance threshold, it is judged that the millimeter-wave radar target failed to match a camera target, and that target is set as a visual missed-detection target.
9. The fog-penetrating target identification method for high-robustness radar-vision fusion, wherein a region of interest of the defogged image is acquired according to the visual missed-detection target;
acquiring the ordinate of the visual missed-detection target through the millimeter-wave radar, and determining the coordinate set of the recognition frame according to the ordinate; converting the recognition-frame coordinate set to obtain a matching coordinate set in the camera coordinate system, and acquiring the region of interest of the defogged image according to the matching coordinate set;
acquiring the scene depth of the visual missed-detection target through the millimeter-wave radar, obtaining the transmittance corresponding to the region of interest and setting it as the processing transmittance, and performing local defogging on the region of interest through the foggy-image scattering model using the corrected atmospheric light value and the processing transmittance to obtain a defogged region; the scene depth is the distance between the monitoring vehicle and the target vehicle.
10. The fog-penetrating target identification method for high-robustness radar-vision fusion as claimed in claim 9, wherein the specific step of determining the coordinate set of the recognition frame according to the ordinate comprises: moving left and right from the ordinate by a preset distance to obtain a plurality of shifted abscissas, multiplying the shifted abscissas by the correction coefficient to obtain a plurality of processed abscissas, and determining the coordinate set of the recognition frame from the processed abscissas and the ordinate.
CN202210047699.1A 2022-01-17 2022-01-17 Fog-penetrating target identification method for high-robustness laser-vision fusion Pending CN114415173A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210047699.1A CN114415173A (en) 2022-01-17 2022-01-17 Fog-penetrating target identification method for high-robustness laser-vision fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210047699.1A CN114415173A (en) 2022-01-17 2022-01-17 Fog-penetrating target identification method for high-robustness laser-vision fusion

Publications (1)

Publication Number Publication Date
CN114415173A (en) 2022-04-29

Family

ID=81272680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210047699.1A Pending CN114415173A (en) 2022-01-17 2022-01-17 Fog-penetrating target identification method for high-robustness laser-vision fusion

Country Status (1)

Country Link
CN (1) CN114415173A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115014366A (en) * 2022-05-31 2022-09-06 中国第一汽车股份有限公司 Target fusion method and device, vehicle and storage medium
CN114998886A (en) * 2022-08-04 2022-09-02 智慧互通科技股份有限公司 Vehicle tracking method and device based on radar vision fusion
CN114998886B (en) * 2022-08-04 2022-10-28 智慧互通科技股份有限公司 Vehicle tracking method and device based on radar vision fusion
CN117475207A (en) * 2023-10-27 2024-01-30 江苏星慎科技集团有限公司 3D-based bionic visual target detection and identification method

Similar Documents

Publication Publication Date Title
CN114415173A (en) Fog-penetrating target identification method for high-robustness laser-vision fusion
CN106709901B (en) Simulation mist drawing generating method based on depth priori
CN112017243B (en) Medium visibility recognition method
CN112731436B (en) Multi-mode data fusion travelable region detection method based on point cloud up-sampling
US10861172B2 (en) Sensors and methods for monitoring flying objects
CN116309781B (en) Cross-modal fusion-based underwater visual target ranging method and device
CN111832410B (en) Forward train detection method based on fusion of vision and laser radar
CN112365497A (en) High-speed target detection method and system based on Trident Net and Cascade-RCNN structures
CN112365467B (en) Foggy image visibility estimation method based on single image depth estimation
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN112613387A (en) Traffic sign detection method based on YOLOv3
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
CN110728269B (en) High-speed rail contact net support pole number plate identification method based on C2 detection data
CN114966696A (en) Transformer-based cross-modal fusion target detection method
CN115240089A (en) Vehicle detection method of aerial remote sensing image
CN111009136A (en) Method, device and system for detecting vehicles with abnormal running speed on highway
CN114814827A (en) Pedestrian classification method and system based on 4D millimeter wave radar and vision fusion
CN117746134A (en) Tag generation method, device and equipment of detection frame and storage medium
CN112016558A (en) Medium visibility identification method based on image quality
CN112198483A (en) Data processing method, device and equipment for satellite inversion radar and storage medium
CN116385336B (en) Deep learning-based weld joint detection method, system, device and storage medium
CN115359094B (en) Moving target detection method based on deep learning
CN116703864A (en) Automatic pavement disease detection system and method based on polarization imaging and integrated learning
CN112883846A (en) Three-dimensional data acquisition imaging system for detecting vehicle front target
CN112818787A (en) Multi-target tracking method fusing convolutional neural network and feature similarity learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination