CN114155501A - Target detection method of unmanned vehicle in smoke shielding environment - Google Patents


Info

Publication number
CN114155501A
CN114155501A (application CN202111465540.3A)
Authority
CN
China
Prior art keywords
millimeter wave
wave radar
target
detection
image
Prior art date
Legal status
Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN202111465540.3A
Other languages
Chinese (zh)
Inventor
熊光明
孙冬
胡秀中
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202111465540.3A
Publication of CN114155501A

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88: Radar or analogous systems specially adapted for specific applications
    • G01S13/93: Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931: Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration


Abstract

The invention discloses a target detection method for an unmanned vehicle in a smoke-shielding environment, belonging to the field of unmanned vehicle perception. Based on a combined calibration method for an infrared camera and a millimeter wave radar, the invention filters out a large number of invalid targets through a millimeter wave radar validity check method, narrows the image detection range through a region-of-interest extraction method with a safety boundary, and fuses an infrared image detection method to obtain the position and speed of the target in front of the unmanned vehicle. According to the invention, the calibration precision of the infrared camera and the millimeter wave radar is improved by optimizing the homography matrix; effective vehicle targets are extracted by checking the validity of millimeter wave radar targets, reducing the amount of fusion data to be processed; robustness is improved through a distance-based adaptive method; by optimizing the neural network, training parameters are reduced and detection speed is improved; the problem of insufficient infrared samples is solved by a transfer learning method; finally, targets are accurately detected in the smoke environment, enhancing the perception capability of the unmanned vehicle.

Description

Target detection method of unmanned vehicle in smoke shielding environment
Technical Field
The invention relates to a target detection method of an unmanned vehicle in a smoke shielding environment, and belongs to the field of unmanned vehicle perception.
Background
Unmanned driving has attracted worldwide attention as a leading-edge vehicle technology, and perception is one of the most important technologies of the unmanned vehicle. With the large investment of research funding and effort in the field of urban automatic driving in recent years, research on unmanned vehicle perception has also achieved great results, and existing unmanned vehicle perception systems are equipped with sensors such as lidar, millimeter wave radar, and visible light cameras.
However, in a field battlefield environment, unmanned vehicles face many challenges, mainly because some sensors of the perception system are prone to failure when the unmanned vehicle is in a smoke environment: the lidar and the visible light camera suffer heavy interference in a smoke-shielding environment and cannot provide accurate target information, greatly weakening the perception system; although the millimeter wave radar is not disturbed by smoke, its output contains a large number of invalid targets, so the perception system of the unmanned vehicle cannot judge the position and speed of real targets.
Therefore, perception of the unmanned vehicle in a smoke-shielded environment remains a difficult and active research topic, and it is urgent to improve the adaptability and anti-interference capability of the environment perception system in a smoke environment.
Disclosure of Invention
The invention aims to provide a target detection method of an unmanned vehicle in a smoke shielding environment, and solves the problem that the target detection capability of the unmanned vehicle in the smoke environment is weak.
The invention is realized by the following specific technical scheme:
based on combined calibration of an infrared camera and a millimeter wave radar, a large number of invalid targets are filtered out by a millimeter wave radar validity check method, the image detection range is narrowed by a region-of-interest extraction method with a safety boundary, and the position and speed of the target in front of the unmanned vehicle are obtained by fusing an infrared image detection method;
the invention discloses a target detection method of an unmanned vehicle in a smoke shielding environment, which comprises the following steps:
the method comprises the following steps of firstly, calibrating an infrared camera and a millimeter wave radar in a combined mode, and specifically comprises the following substeps:
step 1.1, establishing a projection relation between an infrared camera image and a millimeter wave radar detection plane coordinate system:
the coordinate system of the infrared camera is O_c-X_cY_cZ_c, with the infrared camera located at the origin O_c; the pixel coordinate system O_0-uv of the infrared camera is parallel to the plane X_cO_cY_c;
the coordinate system of the millimeter wave radar is O_r-X_rY_rZ_r, with the millimeter wave radar located at the origin O_r; for a millimeter wave radar detection target point r_i, the position in the millimeter wave radar coordinate system is (x_i, y_i, z_i), and since the millimeter wave radar detection target has no altitude data, its position is taken as (x_i, y_i).
The coordinate conversion relation between the infrared camera coordinate system and the millimeter wave radar coordinate system is shown as the formula (1):
ω [u_i, v_i, 1]^T = H [x_i, y_i, 1]^T    (1)
where ω is a scale factor, H is the homography matrix representing the coordinate transformation relationship between the two planes and is a 3 × 3 matrix, and (u_i, v_i) are the position coordinates of the millimeter wave radar target in the infrared camera pixel coordinate system;
step 1.2, marking targets detected by an infrared camera and a millimeter wave radar simultaneously to generate marking data;
step 1.3, determining a homography matrix H according to the marking data generated in the step 1.2;
preferably, the Random Sample Consensus (RANSAC) method is adopted to determine the initial value of the homography matrix H, and the homography matrix H is then optimized by the Levenberg-Marquardt (LM) method;
step two, extracting the effective millimeter wave radar target, and specifically comprising the following substeps:
step 2.1, primarily screening targets detected by the millimeter wave radar, filtering empty signals, and primarily selecting effective targets;
when the number of actually detected targets is less than the number of radar detection channels, null signals appear, whose relative speed, relative distance, and relative angle values are system default values; when the relative speed, relative distance, and relative angle values in the obtained radar target information equal the system default values, the corresponding signals are screened out as null signals;
step 2.2, periodically screening the targets detected by the millimeter wave radar, and filtering out interference signals caused by clutter interference;
due to clutter interference of the surrounding environment, the millimeter wave radar detects discontinuous interference signals, the existence time of the signals is extremely short, and no actual object exists in the corresponding position;
during millimeter wave radar detection, the ID of an effective target remains stable over a long time, whereas the ID of an interference target jumps many times within a short time; combining the target history information output by the millimeter wave radar, the validity of a detected target is judged by formula (2);
|d_i(n) - d_i(n-1)| > d_thresh,  or  |θ_i(n) - θ_i(n-1)| > θ_thresh,  or  |v_i(n) - v_i(n-1)| > v_thresh    (2)
where n is the sampling period index, n = 1, 2, 3, …; d_i, θ_i and v_i are the relative distance, relative angle and relative speed of the i-th detection target; and d_thresh, θ_thresh and v_thresh are respectively the change thresholds of relative distance, relative angle and relative speed within one sampling period interval;
as long as the detection target meets any one condition in formula (2), it is determined to be an invalid target and removed from the detection targets of the millimeter wave radar;
step 2.3, clustering dense millimeter wave radar detection targets, and improving the detection speed and precision:
when an object detected in front of the millimeter wave radar is close, it produces multiple echoes, and the feedback output result is multiple targets;
preferably, an improved Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method is adopted, as shown in formula (3): a distance constraint on the detection target is added, avoiding globally unified parameters and thereby improving clustering accuracy;
[Formula (3), given as an image in the original: the neighborhood density threshold minPts_i is adapted according to the distance d of the target point]
where d is the distance of the target point in the millimeter wave radar coordinate system, minPts is the fixed value of the neighborhood density threshold, minPts_i is the neighborhood density threshold of the i-th point, and D is the maximum variation range of the neighborhood density value;
dividing a high-density area into clusters by using the improved DBSCAN method, and outputting the clusters as millimeter wave radar detection targets;
step three, extracting the image interesting region considering the safety factor, which specifically comprises the following substeps:
step 3.1, determining the central point of the image region of interest:
taking the projection point of the millimeter wave radar point on the infrared image as the central point of the region of interest according to the corresponding relation between the infrared camera imaging plane and the millimeter wave radar detection plane determined in the step one;
step 3.2, determining a basic rectangular frame:
according to the recommended requirements of the standard on automobile aspect ratio in road vehicle overall dimensions, a rectangle is projected into the infrared image coordinate system by the similarity principle to obtain the basic rectangular frame in the infrared image;
step 3.3, extracting an image region of interest:
the edge points of all basic rectangular frames are extracted to form a new rectangle; considering that this rectangle may not contain all parts of the vehicle, a safety factor s (0 < s < 1) is set to reduce missed detections, and the new rectangle is expanded by s in the length and width directions respectively to form the region of interest;
step four, model construction and training:
step 4.1, model construction:
the CSPDarknet53 backbone feature extraction network of YOLOv4 (You Only Look Once) is replaced with a MobileNetv2 network, reducing network parameters and improving the training and detection speed of the network;
step 4.2 model training preparation:
in the training stage, the infrared image is augmented, and is subjected to Gaussian blur and sharpening processing to simulate the conditions under different environments and different imaging qualities;
further, the augmentation method comprises the following steps: performing geometric transformation on the infrared image, and simulating the imaging condition of a camera at different viewing angles;
further, the geometric transformation of the infrared image includes: translation, rotation and mirroring, which simulate the camera at different distances and approach directions;
step 4.3, model training:
in the training stage, the network is trained by a transfer learning method: the MobileNetv2 network is first trained on a visible light image set as pre-training data to obtain preliminary weight parameters, and the training weights are then adjusted with a small number of infrared images to obtain the final training weights;
step five, detecting the region of interest;
in the detection stage, features of the input region-of-interest infrared image are extracted and predicted according to the model constructed and trained in step four; the position and speed information of the millimeter wave radar is then combined with all detection frames to finally generate the position and speed information of the detection targets.
Advantageous effects:
1. the invention discloses a target detection method of an unmanned vehicle in a smoke shielding environment, which describes the relation between a millimeter wave radar detection plane and an infrared camera imaging plane through a homography matrix, and improves the calibration precision of the infrared camera and the millimeter wave radar by optimizing the homography matrix;
2. according to the target detection method of the unmanned vehicle in the smoke shielding environment, disclosed by the invention, the validity of a millimeter wave radar target is checked, the processing amount of fusion data is reduced, and a distance-based adaptive method is used for extracting an effective vehicle target and improving the robustness of the detection method;
3. the invention discloses a target detection method of an unmanned vehicle in a smoke shielding environment, which optimizes a YOLOv4 network by introducing a MobileNetv2 backbone network, reduces training parameters, accelerates the detection speed, solves the problem of insufficient infrared images by adopting a transfer learning method, and improves the performance of the detection method;
4. the target detection method of the unmanned vehicle in the smoke shielding environment, disclosed by the invention, can be used for finally realizing accurate target detection in the smoke environment and enhancing the perception capability of the unmanned vehicle.
Drawings
FIG. 1 is a schematic diagram of an implementation scenario of an embodiment of the present invention;
FIG. 2 is an overall flow diagram of an embodiment of the present invention;
FIG. 3 is a diagram of coordinate system definition of the detection method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of the present invention for extracting millimeter wave radar effective targets;
FIG. 5 is a schematic diagram of a network structure for infrared image detection according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating an infrared image transformation effect according to an embodiment of the present invention;
wherein fig. 6(a) is an original infrared image, fig. 6(b) is an infrared image after translation transformation, fig. 6(c) is an infrared image after rotation transformation, fig. 6(d) is an infrared image after mirror image, fig. 6(e) is an infrared image after blurring processing, and fig. 6(f) is an infrared image after brightness transformation;
FIG. 7 is a schematic diagram of a model training process according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a result of fusion detection of infrared and millimeter wave radar in accordance with an embodiment of the present invention;
fig. 8(a) shows an original visible light image, and fig. 8(b) shows an infrared image and a detection result.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Example 1:
when an unmanned vehicle runs in a smoke environment as shown in fig. 1, the visible light camera and the laser radar equipped in the unmanned vehicle sensing system can be disabled due to the blocking of smoke. In order to detect a target in front of the unmanned vehicle and avoid the target, the target detection method of the unmanned vehicle in the smoke shielding environment is applied in the embodiment to detect the target appearing in front;
as shown in fig. 2, the target detection method of the unmanned vehicle in the smoke-shielded environment disclosed in this embodiment includes the following main processes of joint calibration, extraction of an effective target of a millimeter wave radar, extraction of an image region of interest, data enhancement, model training, and result output, and specifically includes the following steps:
the method comprises the following steps of firstly, calibrating an infrared camera and a millimeter wave radar in a combined mode, and specifically comprises the following substeps:
step 1.1, establishing a projection relation between an infrared camera image and a millimeter wave radar detection plane coordinate system:
as shown in fig. 3, the coordinate system of the infrared camera is O_c-X_cY_cZ_c, with the infrared camera located at the origin O_c; the pixel coordinate system O_0-uv of the infrared camera is parallel to the plane X_cO_cY_c;
as shown in fig. 3, the coordinate system of the millimeter wave radar is O_r-X_rY_rZ_r, with the millimeter wave radar located at the origin O_r; for a millimeter wave radar detection target point r_i, the position in the millimeter wave radar coordinate system is (x_i, y_i, z_i), and since the millimeter wave radar detection target has no altitude data, its position is taken as (x_i, y_i).
As shown in FIG. 3, the millimeter wave radar detects only in the O_r-X_rY_r plane and outputs the 2D coordinates of the obstacle, i.e. z_i = 0; the millimeter wave radar target is at point c_i of the infrared camera pixel coordinate system, with position (u_i, v_i). The coordinate conversion relation between the infrared camera coordinate system and the millimeter wave radar coordinate system is shown in formula (1):
ω [u_i, v_i, 1]^T = H [x_i, y_i, 1]^T    (1)
where ω is a scale factor, H is the homography matrix representing the coordinate transformation relationship between the two planes and is a 3 × 3 matrix as shown in formula (2), and (u_i, v_i) are the position coordinates of the millimeter wave radar target in the infrared camera pixel coordinate system;
H = [ h_11  h_12  h_13 ; h_21  h_22  h_23 ; h_31  h_32  h_33 ]    (2)
where h_11, h_12, h_13, … are the elements of the homography matrix H;
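The projection of formula (1) can be sketched in code. This is an illustrative implementation only, assuming nothing beyond the text above: H is a 3 × 3 homography stored as nested lists, and the scale factor ω is divided out after the matrix product.

```python
# Illustrative sketch of formula (1): projecting a millimeter wave radar
# point (x_i, y_i) into the infrared camera pixel plane through a 3x3
# homography H, with the scale factor w recovered by normalization.
def project_radar_point(H, x, y):
    """Apply w*[u, v, 1]^T = H*[x, y, 1]^T and divide out the scale w."""
    u_h = H[0][0] * x + H[0][1] * y + H[0][2]
    v_h = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u_h / w, v_h / w  # (u_i, v_i) in pixel coordinates

# With a pure-translation homography, the radar point is simply shifted:
print(project_radar_point([[1, 0, 5], [0, 1, 2], [0, 0, 1]], 3.0, 4.0))
```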
step 1.2, marking targets detected by an infrared camera and a millimeter wave radar simultaneously to generate marking data;
step 1.3, determining a homography matrix H according to the marking data generated in the step 1.2;
in an embodiment, a parameter vector P is first defined, where r_i,p is the estimated value of the millimeter wave radar point coordinates and h is the vector formed by the elements of the homography matrix H; further, the parameter vector P is mapped to the corresponding estimated image points, and the Jacobi matrix of this mapping is simplified into a sparse block form, as shown in formula (3); [the definitions of P, the mapping, and formula (3) appear as images in the original and are not reproduced]
then, the initial value of the homography matrix H is determined by using a random Sample Consensus method (RANSAC).
Further, the homography matrix H is optimized by the LM method, i.e. by solving Jδ = ε, where δ = [δ_a δ_b1 δ_b2 … δ_bn]^T is the increment vector and ε = [ε_1 … ε_n]^T is the error vector;
where ε_i = d(r_i, H^(-1)c_i) + d(c_i, H r_i), and d(x_1, x_2) denotes the Euclidean distance between points x_1 and x_2;
then, optimizing the homography matrix H;
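The determination of H from labeled correspondences (step 1.3) can be sketched with the underlying linear system. This is a minimal direct-linear-transform (DLT) least-squares solve under assumed synthetic data; the patent's RANSAC initialization and LM refinement are deliberately omitted here, so this illustrates only the core estimation step.

```python
import numpy as np

# Minimal DLT sketch: estimate the 3x3 homography H from radar/pixel
# correspondences. Each correspondence contributes two rows to the linear
# system A h = 0; h is the right singular vector of A with the smallest
# singular value. RANSAC and LM refinement are omitted for brevity.
def estimate_homography(radar_pts, pixel_pts):
    A = []
    for (x, y), (u, v) in zip(radar_pts, pixel_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so h_33 = 1

# Synthetic check: recover a known homography from four exact correspondences.
H_true = np.array([[1.2, 0.1, 30.0], [0.0, 0.9, 15.0], [0.001, 0.0, 1.0]])
radar = [(0.0, 0.0), (10.0, 0.0), (10.0, 20.0), (0.0, 20.0)]
pixel = []
for x, y in radar:
    u, v, w = H_true @ np.array([x, y, 1.0])
    pixel.append((u / w, v / w))
H_est = estimate_homography(radar, pixel)
print(np.allclose(H_est, H_true, atol=1e-6))
```

In practice the RANSAC step would repeatedly run this solve on random four-point subsets to reject mislabeled correspondences before LM refinement.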
step two, extracting the effective millimeter wave radar target, and specifically comprising the following substeps:
step 2.1, primarily screening targets detected by the millimeter wave radar, filtering empty signals, and primarily selecting effective targets;
when the number of actually detected targets is less than the number of radar detection channels, null signals appear, whose relative speed, relative distance, and relative angle values are system default values; when the relative speed, relative distance, and relative angle values in the obtained radar target information equal the system default values, the corresponding signals are screened out as null signals;
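The null-signal screening of step 2.1 can be sketched as a simple filter. The default values below are placeholders assumed for illustration; a real radar protocol defines its own defaults for empty channels.

```python
# Hedged sketch of step 2.1: drop null signals whose relative distance,
# angle and speed all equal the system default values.
DEFAULT_DIST, DEFAULT_ANGLE, DEFAULT_SPEED = 0.0, 0.0, 0.0  # assumed defaults

def filter_null_signals(targets):
    """Keep only targets whose fields differ from the system defaults."""
    return [t for t in targets
            if not (t["dist"] == DEFAULT_DIST
                    and t["angle"] == DEFAULT_ANGLE
                    and t["speed"] == DEFAULT_SPEED)]

raw = [{"dist": 12.5, "angle": 3.0, "speed": -1.2},
       {"dist": 0.0, "angle": 0.0, "speed": 0.0},   # empty channel
       {"dist": 40.0, "angle": -7.5, "speed": 0.8}]
valid = filter_null_signals(raw)
print(len(valid))  # two valid targets remain
```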
step 2.2, periodically screening the targets detected by the millimeter wave radar, and filtering out interference signals caused by clutter interference;
due to clutter interference of the surrounding environment, the millimeter wave radar detects discontinuous interference signals, the existence time of the signals is extremely short, and no actual object exists in the corresponding position;
during millimeter wave radar detection, the ID of an effective target remains stable over a long time, whereas the ID of an interference target jumps many times within a short time; combining the target history information output by the millimeter wave radar, the validity of a detected target is judged by formula (4);
|d_i(n) - d_i(n-1)| > d_thresh,  or  |θ_i(n) - θ_i(n-1)| > θ_thresh,  or  |v_i(n) - v_i(n-1)| > v_thresh    (4)
where n is the sampling period index, n = 1, 2, 3, …; d_i, θ_i and v_i are the relative distance, relative angle and relative speed of the i-th detection target; and d_thresh, θ_thresh and v_thresh are respectively the change thresholds of relative distance, relative angle and relative speed within one sampling period interval;
as long as the detection target meets any one condition in formula (4), it is determined to be an invalid target and removed from the detection targets of the millimeter wave radar;
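The validity check of formula (4) can be sketched over a target's history. The threshold values are illustrative assumptions, not taken from the patent; the logic is exactly the disjunction above, applied between consecutive sampling periods.

```python
# Sketch of formula (4): a target whose relative distance, angle or speed
# jumps by more than its threshold between two consecutive sampling
# periods is treated as clutter. Thresholds are assumed for illustration.
D_THRESH, THETA_THRESH, V_THRESH = 2.0, 5.0, 3.0

def is_valid_target(history):
    """history: list of (d, theta, v) tuples over successive sampling periods."""
    for (d0, t0, v0), (d1, t1, v1) in zip(history, history[1:]):
        if (abs(d1 - d0) > D_THRESH or abs(t1 - t0) > THETA_THRESH
                or abs(v1 - v0) > V_THRESH):
            return False  # one violated condition is enough to reject
    return True

steady = [(20.0, 1.0, 5.0), (19.5, 1.1, 5.2), (19.0, 1.2, 5.1)]
jumpy = [(20.0, 1.0, 5.0), (45.0, 9.0, 5.0)]  # distance and angle jump
print(is_valid_target(steady), is_valid_target(jumpy))
```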
step 2.3, clustering dense millimeter wave radar detection targets, and improving the detection speed and precision:
when an object detected in front of the millimeter wave radar is close, it produces multiple echoes, and the feedback output result is multiple targets;
preferably, an improved Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method is adopted, as shown in formula (5): a distance constraint on the detection target is added, avoiding globally unified parameters and thereby improving clustering accuracy;
[Formula (5), given as an image in the original: the neighborhood density threshold minPts_i is adapted according to the distance d of the target point]
where d is the distance of the target point in the millimeter wave radar coordinate system, minPts is the fixed value of the neighborhood density threshold, minPts_i is the neighborhood density threshold of the i-th point, and D is the maximum variation range of the neighborhood density value;
in the embodiment, D = 50 and minPts = 4;
the improved DBSCAN method divides a high-density area into clusters, and outputs the clusters as millimeter wave radar detection targets, as shown in FIG. 4;
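The clustering of step 2.3 can be sketched as a compact DBSCAN over 2-D radar points. Because the exact adaptive formula (5) is an image, the `adaptive_minpts` rule below is an assumption made only for illustration: it relaxes the density threshold linearly with range, so distant, echo-sparse objects can still form clusters.

```python
import math

# Compact DBSCAN sketch for step 2.3. EPS, MINPTS and D_MAX are assumed
# values; the distance-adaptive minPts rule is a stand-in for formula (5).
EPS, MINPTS, D_MAX = 1.5, 4, 50.0

def adaptive_minpts(pt):
    r = math.hypot(pt[0], pt[1])  # range of the point from the radar
    return max(2, round(MINPTS * (1.0 - min(r, D_MAX) / (2 * D_MAX))))

def dbscan(points):
    labels = [None] * len(points)  # None = unvisited, -1 = noise
    cluster = 0
    for i, p in enumerate(points):
        if labels[i] is not None:
            continue
        nbrs = [j for j, q in enumerate(points) if math.dist(p, q) <= EPS]
        if len(nbrs) < adaptive_minpts(p):
            labels[i] = -1  # provisional noise
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k, q in enumerate(points)
                  if math.dist(points[j], q) <= EPS]
            if len(jn) >= adaptive_minpts(points[j]):
                seeds.extend(jn)           # core point: expand the cluster
        cluster += 1
    return labels

# Two dense echo groups (one near, one far) plus an isolated clutter point.
# The far group has only 3 echoes, which the adaptive threshold accepts.
pts = [(5.0, 0.1), (5.2, 0.0), (5.1, 0.3), (4.9, 0.2),
       (30.0, 2.0), (30.2, 2.1), (30.1, 1.9),
       (15.0, -8.0)]
print(dbscan(pts))
```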
step three, extracting the image interesting region considering the safety factor, which specifically comprises the following substeps:
step 3.1, determining the central point of the image region of interest:
taking the projection point of the millimeter wave radar point on the infrared image as the central point of the region of interest according to the corresponding relation between the infrared camera imaging plane and the millimeter wave radar detection plane determined in the step one;
step 3.2, determining a basic rectangular frame:
according to the recommended requirements of the national standard on vehicle aspect ratio in road vehicle overall dimensions, the vehicle width and height are limited to 2.55 m and 4 m respectively; a rectangle is drawn with these width and height dimensions and projected into the infrared image coordinate system by the similarity principle to obtain the basic rectangular frame in the infrared image;
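The similarity-principle projection of step 3.2 can be sketched with a pinhole model: the pixel size of the basic frame shrinks in proportion to range. The 2.55 m and 4 m limits come from the text; the focal length is an assumed value for illustration.

```python
# Illustrative sketch of step 3.2: size the basic rectangular frame in the
# image by pinhole similarity, centered at the radar projection point.
FOCAL_PX = 800.0                   # assumed infrared-camera focal length (px)
VEHICLE_W, VEHICLE_H = 2.55, 4.0   # maximum road-vehicle width/height (m)

def basic_box(u, v, range_m):
    """Return (left, top, right, bottom) centered at (u, v)."""
    w = FOCAL_PX * VEHICLE_W / range_m
    h = FOCAL_PX * VEHICLE_H / range_m
    return (u - w / 2, v - h / 2, u + w / 2, v + h / 2)

print(basic_box(320.0, 240.0, 20.0))  # a vehicle 20 m ahead
```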
step 3.3, extracting an image region of interest:
the edge points of all basic rectangular frames are extracted to form a new rectangle; considering that this rectangle may not contain all parts of the vehicle, a safety factor s (0 < s < 1) is set to reduce missed detections, and the new rectangle is expanded by s in the length and width directions respectively to form the region of interest;
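Step 3.3 can be sketched as merging all basic frames into one enclosing rectangle and then expanding it. The symmetric split of the expansion s between the two sides is an assumption; the patent does not specify how s is distributed.

```python
# Sketch of step 3.3: form the enclosing rectangle of all basic frames,
# then expand its length and width by the safety factor s (split evenly
# between the two sides, an assumed convention) to reduce missed detections.
def region_of_interest(boxes, s):
    left = min(b[0] for b in boxes)
    top = min(b[1] for b in boxes)
    right = max(b[2] for b in boxes)
    bottom = max(b[3] for b in boxes)
    dw, dh = s * (right - left) / 2, s * (bottom - top) / 2
    return (left - dw, top - dh, right + dw, bottom + dh)

boxes = [(100, 80, 200, 220), (180, 90, 260, 230)]  # two basic frames
print(region_of_interest(boxes, 0.5))
```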
step four, model construction and training:
step 4.1, model construction:
as shown in fig. 5, the CSPDarknet53 backbone feature extraction network of YOLOv4 (You Only Look Once) is replaced with a MobileNetv2 network, whose main components are depthwise separable convolutions and residual structures; the MobileNetv2 backbone outputs data at three different scales and dimensions, the data of the smallest-scale layer is max-pooled by a Spatial Pyramid Pooling (SPP) structure, the three layers of data are input into a Path Aggregation Network (PANet) module to further extract image features, and finally the detection results are output through the YOLO detection heads;
step 4.2 model training preparation:
in the training stage, the infrared image is augmented, and is subjected to Gaussian blur and sharpening processing to simulate the conditions under different environments and different imaging qualities;
further, the augmentation method comprises the following steps: performing geometric transformation on the infrared image, and simulating the imaging condition of a camera at different viewing angles;
further, the geometric transformation of the infrared image includes: translation, rotation and mirroring for simulating the situation of the camera at different distances and entering directions;
shown in fig. 6 are, respectively, the original infrared image and the infrared images after translation, rotation, mirroring, Gaussian blurring, and sharpening;
step 4.3, model training:
in an embodiment, based on a public dataset and the acquired infrared images, the data are split 6:2:2 for training, validation, and testing respectively;
as shown in fig. 7, in the training stage the network is trained by a transfer learning method: the MobileNetv2 network is trained on a visible light image set as pre-training data to obtain preliminary weight parameters, and the training weights are then adjusted with a small number of infrared images;
to accelerate model training and obtain a better training effect, a freezing training strategy is adopted: the first 161 network layers are frozen at first, one batch contains 16 pictures, and the maximum number of iterations is 50;
in the network training stage, asynchronous gradient descent is adopted, the initial learning rate is set to be 0.001, and when the loss value is detected to descend slowly, the learning rate is reduced according to the attenuation coefficient of 0.0005;
after one round of training, obtaining a preliminary weight, unfreezing all networks, and training by taking 8 pictures as one batch, wherein the maximum iteration number is 50;
the second round of training also uses an asynchronous gradient descent strategy; if reducing the learning rate can no longer reduce the loss function, training is considered complete and the final training weight is obtained.
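The learning-rate handling described above (initial rate 0.001, reduced when the loss descends slowly) can be sketched as plain bookkeeping, independent of any deep-learning framework. The plateau test and the reduction factor are assumptions; the patent states only the 0.0005 decay coefficient without specifying exactly how it is applied.

```python
# Sketch of the plateau-based learning-rate schedule of step 4.3.
# SLOW_DROP (descent slower than 1% per step) and REDUCE (multiply the
# rate by 0.1 on a plateau) are illustrative assumptions.
INIT_LR, REDUCE, SLOW_DROP = 0.001, 0.1, 0.01

def schedule_lr(losses):
    """Return the learning rate in effect after each observed loss."""
    lr, history = INIT_LR, [INIT_LR]
    for prev, cur in zip(losses, losses[1:]):
        if prev - cur < SLOW_DROP * prev:  # loss descending slowly
            lr *= REDUCE
        history.append(lr)
    return history

losses = [10.0, 8.0, 6.5, 6.45, 6.44]  # training slows after epoch 3
print(schedule_lr(losses))
```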
Step five, detecting the region of interest;
in the detection stage, features of the input region-of-interest infrared image are extracted and predicted according to the model constructed and trained in step four; the position and speed information of the millimeter wave radar is then combined with all detection frames to finally generate the position and speed information of the detection targets;
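The final fusion step can be sketched as pairing each image detection frame with the radar target whose projected point falls inside it, attaching the radar's distance and relative speed to the visual detection. Matching by point-in-box containment is an assumed simplification of the combination described above.

```python
# Sketch of step five's fusion: attach radar position/speed to each image
# detection frame whose area contains the radar target's projected point.
def fuse(detections, radar_targets):
    """detections: (left, top, right, bottom) boxes;
    radar_targets: dicts with projected pixel (u, v), dist and speed."""
    fused = []
    for box in detections:
        l, t, r, b = box
        for tgt in radar_targets:
            if l <= tgt["u"] <= r and t <= tgt["v"] <= b:
                fused.append({"box": box,
                              "dist": tgt["dist"],
                              "speed": tgt["speed"]})
                break  # first matching radar target wins (assumed rule)
    return fused

dets = [(100, 80, 200, 220)]
radar = [{"u": 150.0, "v": 120.0, "dist": 22.4, "speed": -1.5},
         {"u": 400.0, "v": 300.0, "dist": 55.0, "speed": 0.0}]
result = fuse(dets, radar)
print(result)
```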
the detection result of the embodiment is shown in fig. 8, where fig. 8(a) is a picture taken by a visible light camera, and fig. 8(b) is a fusion detection result of an infrared camera and a millimeter wave radar;
as can be seen from fig. 8(a), in a smoke-shielded environment the visible light camera cannot determine whether there is a target in front of the unmanned vehicle, while the method of the present invention can detect the target and acquire its position and relative speed information, as shown in fig. 8(b).
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (4)

1. A target detection method of an unmanned vehicle in a smoke-shielded environment, characterized by comprising the following steps: based on the joint calibration of an infrared camera and a millimeter wave radar, a large number of invalid targets are filtered out by a millimeter wave radar validity check method, the image detection range is narrowed by a safety-boundary image region-of-interest extraction method, and the position and speed of a target in front of the unmanned vehicle are obtained by a fused infrared image detection method.
2. The method for detecting the target of the unmanned vehicle in the smoke-shielded environment according to claim 1, wherein the method comprises the following steps: the method comprises the following steps:
step one, jointly calibrating the infrared camera and the millimeter wave radar, which specifically comprises the following substeps:
step 1.1, establishing a projection relation between an infrared camera image and a millimeter wave radar detection plane coordinate system:
the coordinate system of the infrared camera is Oc-XcYcZc, the infrared camera is located at the coordinate origin Oc, and the pixel coordinate system O0-uv of the infrared camera is parallel to the plane XcOcYc;
the coordinate system of the millimeter wave radar is Or-XrYrZr, and the millimeter wave radar is located at the coordinate origin Or; the position of a millimeter wave radar detection target point ri in the millimeter wave radar coordinate system is (xi, yi, zi), and since the millimeter wave radar detection target carries no altitude data, its position is taken as (xi, yi);
the coordinate conversion relation between the infrared camera coordinate system and the millimeter wave radar coordinate system is shown in formula (1):
ω·[ui, vi, 1]^T = H·[xi, yi, 1]^T (1)
where ω is a scale factor, H is a 3 × 3 homography matrix representing the coordinate transformation relation between the two planes, and (ui, vi) are the position coordinates of the millimeter wave radar target in the infrared camera pixel coordinate system;
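Formula (1) maps a radar detection (xi, yi) to pixel coordinates (ui, vi) through the homography H. A minimal numpy sketch (the matrix H below is an arbitrary example for illustration, not a calibrated one):

```python
import numpy as np

def radar_to_pixel(H, x, y):
    """Project a millimeter wave radar point (x, y) into the infrared image
    pixel plane via formula (1): w * [u, v, 1]^T = H * [x, y, 1]^T."""
    p = H @ np.array([x, y, 1.0])
    w = p[2]                      # scale factor omega
    return p[0] / w, p[1] / w     # (u_i, v_i)

# Example with an arbitrary (uncalibrated) homography matrix.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 3.0, 20.0],
              [0.0, 0.0, 1.0]])
u, v = radar_to_pixel(H, 5.0, 4.0)   # -> (20.0, 32.0)
```

The division by the third component is what removes the scale factor ω, so the result is a true pixel coordinate.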
step 1.2, marking targets detected by an infrared camera and a millimeter wave radar simultaneously to generate marking data;
step 1.3, determining a homography matrix H according to the marking data generated in the step 1.2;
step two, extracting the effective millimeter wave radar target, and specifically comprising the following substeps:
step 2.1, primarily screening targets detected by the millimeter wave radar, filtering empty signals, and primarily selecting effective targets;
a null signal appears when the number of actually detected targets is less than the number of target channels output by the radar; the relative speed, relative distance and relative angle of a null signal all take system default values, so when these values in the acquired radar target information equal the system default values, the signal is screened out as a null signal;
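The null-signal screening of step 2.1 amounts to dropping channels whose fields all equal the system defaults; a sketch, with made-up sentinel values (real radars document their own defaults in the sensor datasheet):

```python
# Sketch of step 2.1: drop radar channels whose fields are all system
# default values. The sentinel values below are illustrative assumptions,
# not values taken from the patent.
DEFAULTS = {"distance": 0.0, "angle": 0.0, "speed": 81.91}

def filter_null_signals(targets):
    """Keep only channels whose distance/angle/speed differ from defaults."""
    return [t for t in targets
            if not all(t[k] == DEFAULTS[k] for k in DEFAULTS)]

raw = [
    {"distance": 12.5, "angle": -3.0, "speed": 1.2},   # real target
    {"distance": 0.0,  "angle": 0.0,  "speed": 81.91}, # empty channel
]
valid = filter_null_signals(raw)   # only the real target remains
```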
step 2.2, periodically screening the targets detected by the millimeter wave radar, and filtering out interference signals caused by clutter interference;
due to clutter interference of the surrounding environment, the millimeter wave radar detects discontinuous interference signals, the existence time of the signals is extremely short, and no actual object exists in the corresponding position;
during millimeter wave radar detection, the identity of an effective target remains stable for a long time, while the identity of an interference target jumps many times within a short time; combined with the target history information output by the millimeter wave radar, the validity of a detected target is judged by formula (2);
|di(n+1) − di(n)| > dthresh, |θi(n+1) − θi(n)| > θthresh, |vi(n+1) − vi(n)| > vthresh (2)
where n is the sampling period number, n = 1, 2, 3, …; di, θi and vi are the relative distance, relative angle and relative speed of the ith detection target; dthresh, θthresh and vthresh are respectively the change thresholds of relative distance, relative angle and relative speed within one sampling period interval;
as long as a detection target meets any one condition in formula (2), it is determined to be an invalid target and is removed from the millimeter wave radar detection targets;
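The validity check of formula (2) compares consecutive sampling periods and rejects targets whose state jumps; a sketch, with illustrative threshold values (the patent leaves the thresholds unspecified):

```python
def is_invalid(prev, curr, d_thresh=2.0, a_thresh=5.0, v_thresh=3.0):
    """Formula (2) check between two consecutive sampling periods: a target
    whose relative distance, angle or speed jumps beyond its threshold is
    judged invalid. Threshold values here are illustrative assumptions."""
    d0, a0, v0 = prev
    d1, a1, v1 = curr
    return (abs(d1 - d0) > d_thresh or
            abs(a1 - a0) > a_thresh or
            abs(v1 - v0) > v_thresh)

# A stable target barely moves between periods; clutter jumps.
stable = is_invalid((20.0, 1.0, 0.5), (20.3, 1.1, 0.6))   # False
jumpy = is_invalid((20.0, 1.0, 0.5), (35.0, 1.1, 0.6))    # True
```

Because the conditions are OR-ed, a jump in any single quantity is enough to discard the target, matching the "meets any one condition" wording above.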
step 2.3, clustering dense millimeter wave radar detection targets, and improving the detection speed and precision:
when a detected object in front of the millimeter wave radar is close, the object generates a plurality of echoes, and the feedback output contains a plurality of targets;
dividing the high-density area into clusters, and outputting the clusters as millimeter wave radar detection targets;
step three, extracting the image interesting region considering the safety factor, which specifically comprises the following substeps:
step 3.1, determining the central point of the image region of interest:
taking the projection point of the millimeter wave radar point on the infrared image as the central point of the region of interest according to the corresponding relation between the infrared camera imaging plane and the millimeter wave radar detection plane determined in the step one;
step 3.2, determining a basic rectangular frame:
according to the aspect ratio of automobile appearance recommended by the international standard on road vehicle overall dimensions, and by the similarity principle, the rectangle is projected into the infrared image coordinate system to obtain a basic rectangular frame in the infrared image;
step 3.3, extracting an image region of interest:
extracting the edge points of all basic rectangular frames to form a new rectangle; considering that the rectangular frame may fail to contain all parts of the vehicle, a safety factor s (0 < s < 1) is set to reduce missed detection, and the new rectangle is expanded by s in the length direction and the width direction respectively to form the region of interest;
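Step three can be sketched as follows; the (x1, y1, x2, y2) box layout and the symmetric split of the expansion between the two sides are assumptions made for illustration:

```python
def region_of_interest(boxes, s, img_w, img_h):
    """Step 3.3 sketch: enclose all basic rectangles (x1, y1, x2, y2) in one
    rectangle, then expand it by safety factor s (0 < s < 1) in the length
    and width directions, clipped to the image bounds. The factor s and the
    clipping follow the claim; the tuple layout is an assumption."""
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    dx = s * (x2 - x1) / 2.0      # half of s * length added on each side
    dy = s * (y2 - y1) / 2.0
    return (max(0, x1 - dx), max(0, y1 - dy),
            min(img_w, x2 + dx), min(img_h, y2 + dy))

# Two basic frames; enclosing rect (10, 10, 80, 60) grown by s = 0.2.
roi = region_of_interest([(10, 10, 50, 30), (40, 20, 80, 60)], 0.2, 640, 480)
```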
step four, model construction and training:
step 4.1, model construction:
the CSPDarknet53 backbone feature extraction network of YOLOv4 (You Only Look Once) is replaced with a MobileNetv2 network, which reduces the network parameters and improves the training and detection speed of the network;
step 4.2, model training preparation:
in the training stage, the infrared images are augmented: Gaussian blur and sharpening are applied to simulate the conditions under different environments and different imaging qualities;
further, the augmentation method comprises: performing geometric transformations on the infrared images to simulate camera imaging at different viewing angles;
further, the geometric transformations of the infrared images include translation, rotation and mirroring, which simulate the camera at different distances and imaging directions;
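The augmentations of step 4.2 (blur, mirroring, translation) can be sketched on a grayscale array; the 3x3 mean kernel below is an illustrative stand-in for Gaussian blur, and np.roll wraps pixels around rather than filling borders:

```python
import numpy as np

def augment(img, shift=1):
    """Sketch of the step 4.2 augmentations: mirroring, translation and a
    3x3 mean blur standing in for Gaussian blur (the kernel choice is an
    illustrative simplification). `img` is a 2-D grayscale array."""
    mirrored = np.fliplr(img)                 # mirror
    translated = np.roll(img, shift, axis=1)  # horizontal translation (wraps)
    padded = np.pad(img.astype(float), 1, mode="edge")
    blurred = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return mirrored, translated, blurred

img = np.arange(16, dtype=float).reshape(4, 4)
m, t, b = augment(img)
```

A production pipeline would draw these transforms randomly per image (and add rotation), but the per-transform mechanics are the same.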
step 4.3, model training:
in the training stage, the network is trained by a transfer learning method: the MobileNetv2 network is trained with a visible light image set as pre-training data to obtain preliminary weight parameters, and a small number of infrared images are then used to fine-tune the weights and obtain the final training weights;
step five, detecting the region of interest;
in the detection stage, features of the input infrared region-of-interest image are extracted and predicted according to the model constructed and trained in step four, and the position and speed information of the millimeter wave radar is then combined with all detection frames to finally generate the position and speed information of the detected targets.
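The fusion of step five, associating each detection frame with a projected radar return, can be sketched as follows; the data layouts and the nearest-return tie-break are assumptions made for illustration:

```python
def fuse(boxes, radar_points):
    """Step five sketch: attach radar distance and speed to each detection
    frame whose area contains the projected radar point (u, v); if several
    returns fall inside one frame, the nearest one wins (an assumption).

    boxes:        list of (x1, y1, x2, y2) detection frames
    radar_points: list of (u, v, distance, speed) projected radar targets
    """
    fused = []
    for (x1, y1, x2, y2) in boxes:
        hits = [(d, s) for (u, v, d, s) in radar_points
                if x1 <= u <= x2 and y1 <= v <= y2]
        if hits:
            d, s = min(hits)          # closest radar return
            fused.append({"box": (x1, y1, x2, y2),
                          "distance": d, "speed": s})
    return fused

# One frame contains a projected radar return; a second return falls outside.
out = fuse([(100, 80, 200, 160)],
           [(150, 120, 23.4, -1.8), (500, 90, 60.0, 0.0)])
```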
3. The method for detecting the target of the unmanned vehicle in the smoke-shielded environment as claimed in claim 1 or 2, wherein: in step 1.3, an initial value of the homography matrix H is determined by using the Random Sample Consensus (RANSAC) algorithm, and the homography matrix H is then optimized by using the Levenberg-Marquardt (LM) method.
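The estimate in claim 3 starts from the marked correspondences of step 1.2; the direct linear transform (DLT) below recovers H from exact pairs by SVD. The RANSAC outlier rejection and Levenberg-Marquardt refinement recited in the claim are omitted for brevity (OpenCV's cv2.findHomography with the cv2.RANSAC flag provides both):

```python
import numpy as np

def estimate_homography(radar_pts, pixel_pts):
    """Direct linear transform: solve w*[u,v,1]^T = H*[x,y,1]^T from >= 4
    marked correspondences (step 1.2) via the SVD null space. RANSAC and
    LM refinement, as in claim 3, would wrap and follow this step."""
    A = []
    for (x, y), (u, v) in zip(radar_pts, pixel_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]            # normalize so H[2, 2] = 1

# Recover a known homography from four exact correspondences.
H_true = np.array([[1.0, 0.2, 5.0], [0.1, 1.5, -3.0], [0.001, 0.0, 1.0]])
radar = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
pixel = []
for x, y in radar:
    p = H_true @ np.array([x, y, 1.0])
    pixel.append((p[0] / p[2], p[1] / p[2]))
H_est = estimate_homography(radar, pixel)
```

With noisy real markings, RANSAC would run this solver on random 4-point subsets and keep the consensus set before the LM polish.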
4. The method for detecting the target of the unmanned vehicle in the smoke-shielded environment as claimed in claim 1 or 2, wherein: in step 2.3, an improved Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method is adopted: as shown in formula (3), a distance constraint on the detection target is added so that globally unified parameters are avoided, thereby improving the clustering accuracy;
in formula (3), the neighborhood density threshold minPtsi of the ith target is a function of d, where d is the distance of the target point in the millimeter wave radar coordinate system, minPts is the fixed base value of the neighborhood density threshold, and D is the maximum variation range of the neighborhood density threshold.
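Claim 4 can be sketched as a DBSCAN variant whose density threshold minPtsi depends on the target distance d (radar returns thin out with range). Since the exact formula (3) is not reproduced in this text, the linear form below is only an assumed stand-in; sklearn's DBSCAN would be the fixed-parameter baseline it improves on:

```python
import math

def adaptive_minpts(d, base=4, var_range=2, d_max=100.0):
    """Assumed stand-in for formula (3): the neighborhood density threshold
    decreases linearly from base to base - var_range as distance grows."""
    return max(1, round(base - var_range * min(d, d_max) / d_max))

def dbscan(points, eps=2.0):
    """Minimal DBSCAN over 2-D radar points (x, y) with a per-point minPts.
    Returns one cluster label per point; -1 marks noise."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if math.hypot(xi - xj, yi - yj) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        d_i = math.hypot(*points[i])          # range from the radar origin
        seeds = neighbors(i)
        if len(seeds) < adaptive_minpts(d_i):
            labels[i] = -1                    # provisionally noise
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:                          # expand the cluster
            j = queue.pop()
            if labels[j] in (None, -1):
                if labels[j] is None:
                    nb = neighbors(j)
                    if len(nb) >= adaptive_minpts(math.hypot(*points[j])):
                        queue.extend(nb)      # j is a core point
                labels[j] = cluster
        cluster += 1
    return labels

# Two dense echo groups and one isolated clutter point.
pts = [(5, 5), (5.5, 5), (5, 5.5), (5.4, 5.4),
       (40, 1), (40.5, 1), (40, 1.5),
       (20, 30)]
labels = dbscan(pts)
```

Lowering minPtsi with distance lets the sparse far group survive as a cluster while isolated clutter is still rejected as noise.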
CN202111465540.3A 2021-12-03 2021-12-03 Target detection method of unmanned vehicle in smoke shielding environment Pending CN114155501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111465540.3A CN114155501A (en) 2021-12-03 2021-12-03 Target detection method of unmanned vehicle in smoke shielding environment

Publications (1)

Publication Number Publication Date
CN114155501A true CN114155501A (en) 2022-03-08

Family

ID=80455978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111465540.3A Pending CN114155501A (en) 2021-12-03 2021-12-03 Target detection method of unmanned vehicle in smoke shielding environment

Country Status (1)

Country Link
CN (1) CN114155501A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369541A (en) * 2020-03-06 2020-07-03 吉林大学 Vehicle detection method for intelligent automobile under severe weather condition
CN111368706A (en) * 2020-03-02 2020-07-03 南京航空航天大学 Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision
US20200301013A1 (en) * 2018-02-09 2020-09-24 Bayerische Motoren Werke Aktiengesellschaft Methods and Apparatuses for Object Detection in a Scene Based on Lidar Data and Radar Data of the Scene
CN111965636A (en) * 2020-07-20 2020-11-20 重庆大学 Night target detection method based on millimeter wave radar and vision fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIN Lisheng et al.: "Nighttime forward vehicle detection based on millimeter wave radar and machine vision", Journal of Automotive Safety and Energy, no. 02, 15 June 2016 (2016-06-15) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019276A (en) * 2022-06-30 2022-09-06 南京慧尔视智能科技有限公司 Target detection method, system and related equipment
CN115019276B (en) * 2022-06-30 2023-10-27 南京慧尔视智能科技有限公司 Target detection method, system and related equipment
CN116342708A (en) * 2022-12-05 2023-06-27 广西北港大数据科技有限公司 Homography transformation-based millimeter wave radar and camera automatic calibration method
CN116129292A (en) * 2023-01-13 2023-05-16 华中科技大学 Infrared vehicle target detection method and system based on few sample augmentation
CN116129292B (en) * 2023-01-13 2024-07-26 华中科技大学 Infrared vehicle target detection method and system based on few sample augmentation
CN116448115A (en) * 2023-04-07 2023-07-18 连云港杰瑞科创园管理有限公司 Unmanned ship probability distance map construction method based on navigation radar and photoelectricity
CN116448115B (en) * 2023-04-07 2024-03-19 连云港杰瑞科创园管理有限公司 Unmanned ship probability distance map construction method based on navigation radar and photoelectricity
CN116843941A (en) * 2023-05-15 2023-10-03 北京中润惠通科技发展有限公司 Intelligent analysis system for detection data of power equipment
CN117784121A (en) * 2024-02-23 2024-03-29 四川天府新区北理工创新装备研究院 Combined calibration method and system for road side sensor and electronic equipment
CN118376995A (en) * 2024-06-25 2024-07-23 深圳安德空间技术有限公司 Dam leakage detection automatic identification method and system based on shipborne ground penetrating radar

Similar Documents

Publication Publication Date Title
CN114155501A (en) Target detection method of unmanned vehicle in smoke shielding environment
CN111274976B (en) Lane detection method and system based on multi-level fusion of vision and laser radar
Heinzler et al. Cnn-based lidar point cloud de-noising in adverse weather
CN108596081B (en) Vehicle and pedestrian detection method based on integration of radar and camera
CN107230218B (en) Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
CN112001958B (en) Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation
TW202004560A (en) Object detection system, autonomous vehicle, and object detection method thereof
CN111079556A (en) Multi-temporal unmanned aerial vehicle video image change area detection and classification method
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN114022830A (en) Target determination method and target determination device
CN115761550A (en) Water surface target detection method based on laser radar point cloud and camera image fusion
CN106096604A (en) Multi-spectrum fusion detection method based on unmanned platform
CN112731436B (en) Multi-mode data fusion travelable region detection method based on point cloud up-sampling
CN106023257A (en) Target tracking method based on rotor UAV platform
CN114495064A (en) Monocular depth estimation-based vehicle surrounding obstacle early warning method
CN108263389A (en) A kind of vehicle front false target device for eliminating and method
CN115100741B (en) Point cloud pedestrian distance risk detection method, system, equipment and medium
CN116935369A (en) Ship water gauge reading method and system based on computer vision
CN114814827A (en) Pedestrian classification method and system based on 4D millimeter wave radar and vision fusion
CN117611911A (en) Single-frame infrared dim target detection method based on improved YOLOv7
CN117576665B (en) Automatic driving-oriented single-camera three-dimensional target detection method and system
CN117215316B (en) Method and system for driving environment perception based on cooperative control and deep learning
CN116778262B (en) Three-dimensional target detection method and system based on virtual point cloud
CN113933828A (en) Unmanned ship environment self-adaptive multi-scale target detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination