CN113280829A - Target detection method and device based on fisheye vision and millimeter wave radar data - Google Patents


Info

Publication number
CN113280829A
CN113280829A (application CN202110705010.5A)
Authority
CN
China
Prior art keywords
target
data
fisheye
wave radar
millimeter wave
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110705010.5A
Other languages
Chinese (zh)
Inventor
张斌 (Zhang Bin)
陈凯 (Chen Kai)
廖国勇 (Liao Guoyong)
张岁寒 (Zhang Suihan)
龚建球 (Gong Jianqiu)
邵将 (Shao Jiang)
刘剑雄 (Liu Jianxiong)
郭涛 (Guo Tao)
Current Assignee
Guangdong Yingfeng Intelligent Environmental Sanitation Technology Co ltd
Zoomlion Heavy Industry Science and Technology Co Ltd
Original Assignee
Guangdong Yingfeng Intelligent Environmental Sanitation Technology Co ltd
Zoomlion Heavy Industry Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Yingfeng Intelligent Environmental Sanitation Technology Co ltd and Zoomlion Heavy Industry Science and Technology Co Ltd
Priority to CN202110705010.5A
Publication of CN113280829A
Legal status: Pending

Classifications

    • G01C21/34 Route searching; Route guidance
    • G01C21/28 Navigation specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01S13/867 Combination of radar systems with cameras
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T2207/20081 Training; Learning
    • G06T2207/20104 Interactive definition of region of interest [ROI]


Abstract

The invention discloses a target detection method and a target detection device based on fisheye vision and millimeter-wave radar data, wherein the method comprises the following steps: correcting the obtained fisheye visual image to obtain a corrected image; screening effective targets according to the key motion data and statistical characteristics of the millimeter-wave radar data; converting the millimeter-wave radar coordinates into a pixel coordinate system; performing data fusion on the corrected image and the effective targets to obtain fusion data, and performing target identification with the DetectoRS algorithm to obtain a target identification result; and performing data fusion on the target identification result and the characteristic data of the effective targets by adopting a weighted information fusion decision to obtain a target detection result. In the embodiment of the invention, the fisheye visual image is acquired by a fisheye camera, so that the data for target detection is more comprehensive, and the DetectoRS algorithm is adopted to identify targets in the region of interest of the fused data, which can effectively improve the accuracy of target detection.

Description

Target detection method and device based on fisheye vision and millimeter wave radar data
Technical Field
The invention relates to the technical field of target detection, in particular to a target detection method and device based on fisheye vision and millimeter wave radar data.
Background
The intelligent automobile environment sensing layer obtains environment information through sensors such as a laser radar, a millimeter wave radar and a camera, and preparation is made for vehicle path planning and path tracking. The sensors for environment sensing have respective advantages and disadvantages, and complete environment information is difficult to acquire by adopting a single sensor, so the current mainstream scheme is to integrate the performance of each sensor and improve the accuracy and the redundancy of environment sensing by adopting an information fusion mode.
At present, existing target detection methods generally adopt a deep learning network based on the YOLO algorithm, but such networks suffer from high false detection and missed detection rates, so the accuracy of the target detection results obtained by existing methods is low.
Disclosure of Invention
The invention provides a target detection method and device based on fisheye vision and millimeter-wave radar data, and aims to solve the technical problem that the accuracy of the target detection result obtained by existing target detection methods is low due to their high false detection rate and missed detection rate.
A first embodiment of the present invention provides a target detection method based on fisheye vision and millimeter-wave radar data, including:
acquiring a fisheye visual image, and correcting the fisheye visual image through a correction algorithm to obtain a corrected image;
acquiring millimeter wave radar data, and screening effective targets according to key motion data and statistical characteristics of the millimeter wave radar data to obtain effective targets;
projecting the effective target into the corrected image, and converting the millimeter-wave radar coordinates into the pixel coordinate system according to a conversion relation between the millimeter-wave radar coordinate system related to the effective target and the pixel coordinate system related to the corrected image;
performing data fusion on the corrected image and the effective target to obtain fusion data, and performing target identification on the region of interest in the fusion data by adopting a DetectoRS algorithm to obtain a target identification result;
and performing data fusion on the target identification result and the characteristic data of the effective target by adopting a weighted information fusion decision to obtain a target detection result.
Further, the obtaining of the fisheye visual image and the correcting of the fisheye visual image data by the correction algorithm to obtain a corrected image specifically include:
the fisheye visual image is acquired through a fisheye camera, and the fisheye visual image data is corrected by adopting a Taylor formula and an equidistant projection model to obtain a corrected image.
Further, the millimeter wave radar data are obtained, effective target screening is carried out according to the key motion data and the statistical characteristics of the millimeter wave radar data, and effective targets are obtained, specifically:
calculating target distance, absolute speed and scattering sectional area intensity according to the key motion data and the statistical characteristics of the millimeter wave radar data, and screening the target as an effective target if the target distance is smaller than a preset distance threshold value, the absolute speed is greater than a preset speed value and the scattering sectional area intensity is within a preset intensity range.
Further, the converting of the millimeter-wave radar coordinates into the pixel coordinate system according to the conversion relationship between the millimeter-wave radar coordinate system related to the effective target and the pixel coordinate system related to the corrected image specifically includes:
the method comprises the steps of obtaining a conversion relation among a plurality of coordinate systems according to a computer vision theory and a camera model, obtaining camera internal parameters and external parameters in the fisheye camera according to the conversion relation, and realizing conversion of millimeter wave radar coordinates and a pixel coordinate system according to the internal parameters and the external parameters, wherein the coordinate systems comprise a millimeter wave radar coordinate system, a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system.
Further, the target recognition is performed on the region of interest in the fusion data by using a visual recognition algorithm to obtain a target recognition result, which specifically comprises:
and performing target identification on the region of interest in the fusion data by adopting the DetectoRS algorithm, which is based on a "looking and thinking twice" mechanism for target detection, adopting a recursive feature pyramid macroscopically and switchable atrous convolution microscopically, to obtain a target identification result.
Further, performing data fusion on the corrected image and the effective target to obtain fusion data, specifically:
when the millimeter wave radar periodically collects millimeter wave radar data, a fisheye visual image at the current moment is collected, and an effective target obtained according to the millimeter wave radar data and a corrected image obtained from the fisheye visual image are subjected to data fusion to obtain fusion data.
Further, the method for acquiring the interest region comprises the following steps: after the effective target is projected into the corrected image, a rectangular area with a preset size is set in the projected image as a region of interest.
Further, the target recognition result includes a first target confidence coefficient and a target bounding box, the feature data includes a second target confidence coefficient and position and velocity information, and a weighted information fusion decision is adopted to perform data fusion on the target recognition result and the feature data of the effective target to obtain a target detection result, specifically:
and performing spatial fusion and time registration on the target identification result and the characteristic data by adopting a weighted information fusion decision to obtain a target detection result.
The second embodiment of the invention provides a target detection device based on fisheye vision and millimeter-wave radar data, which comprises an image correction module, a target screening module, a coordinate conversion module, a target identification module and a target detection module, wherein the image correction module is used for acquiring a fisheye visual image and correcting the fisheye visual image through a correction algorithm to obtain a corrected image;
the target screening module is used for acquiring millimeter wave radar data and screening effective targets according to key motion data and statistical characteristics of the millimeter wave radar data to obtain effective targets;
the coordinate conversion module is used for projecting the effective target into the corrected image and converting the millimeter-wave radar coordinates into the pixel coordinate system according to the conversion relation between the millimeter-wave radar coordinate system related to the effective target and the pixel coordinate system related to the corrected image;
the target identification module is used for carrying out data fusion on the corrected image and the effective target to obtain fusion data, and carrying out target identification on the region of interest in the fusion data by adopting a DetectoRS algorithm to obtain a target identification result;
the target detection module is used for carrying out data fusion on the target identification result and the characteristic data of the effective target by adopting a weighted information fusion decision to obtain a target detection result; the target recognition result comprises a first target confidence degree and a target surrounding box, and the feature data comprises a second target confidence degree and position and speed information.
In the embodiment of the invention, the fisheye visual image is acquired by the fisheye camera, so that image data with a wider field of view, including obstacle image data on both sides of the camera, can be acquired; the data for target detection is therefore more comprehensive, and fusing the fisheye visual image with the millimeter-wave data for target detection can improve the reliability and comprehensiveness of target detection.
Furthermore, the detection method and device adopt the DetectoRS algorithm to identify targets in the region of interest of the fusion data, which can effectively reduce the false detection rate and missed detection rate of target detection and helps improve the accuracy of target detection.
Drawings
Fig. 1 is a schematic flowchart of a target detection method based on fisheye vision and millimeter-wave radar data according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an isometric projection model provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of a relative relationship between coordinate systems provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an overall structure of a SAC provided in an embodiment of the present invention;
FIG. 5 is a comparison chart of target detection results provided by the embodiment of the present invention;
fig. 6 is a schematic structural diagram of a target detection device based on fisheye vision and millimeter-wave radar data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; a mechanical or an electrical connection; a direct connection or an indirect connection through intervening media; or internal communication between two elements. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
Referring to fig. 1 to 5, a first embodiment of the present invention provides a target detection method based on fisheye vision and millimeter-wave radar data as shown in fig. 1, including:
S1, acquiring a fisheye visual image, and correcting the fisheye visual image through a correction algorithm to obtain a corrected image;
S2, acquiring millimeter-wave radar data, and screening effective targets according to key motion data and statistical characteristics of the millimeter-wave radar data;
S3, projecting the effective target into the corrected image, and converting the millimeter-wave radar coordinates into the pixel coordinate system according to the conversion relation between the millimeter-wave radar coordinate system related to the effective target and the pixel coordinate system related to the corrected image;
S4, performing data fusion on the corrected image and the effective target to obtain fusion data, and performing target identification on the region of interest in the fusion data by adopting the DetectoRS algorithm to obtain a target identification result;
S5, performing data fusion on the target identification result and the characteristic data of the effective target by adopting a weighted information fusion decision to obtain a target detection result.
Optionally, the target recognition result includes a first target confidence and a target bounding box, the feature data includes a second target confidence and position and velocity information, and a weighted information fusion decision is adopted to perform data fusion on the target recognition result and the feature data of the effective target to obtain a target detection result, specifically: and performing spatial fusion and time registration on the target recognition result and the characteristic data by adopting a weighted information fusion decision to obtain a target detection result.
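The weighted information fusion decision of step S5 can be sketched as a weighted combination of the vision-side and radar-side confidences for a spatially and temporally matched target pair. The weight values and the simple weighted-sum rule below are illustrative assumptions; the patent does not give the concrete formula:

```python
def fuse_decisions(conf_vision, conf_radar, w_vision=0.6, w_radar=0.4):
    """Fuse the first target confidence (from the DetectoRS recognition
    result) with the second target confidence (from the radar's feature
    data) as a weighted sum. Weight values are placeholders."""
    assert abs(w_vision + w_radar - 1.0) < 1e-9, "weights must sum to 1"
    return w_vision * conf_vision + w_radar * conf_radar
```

Under this sketch, a target seen strongly by only one sensor still receives a nonzero fused confidence, which is one plausible way a weighted decision improves redundancy over a hard AND of the two detectors.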
In the embodiment of the invention, the fisheye visual image is acquired by the fisheye camera, so that image data with a wider field of view, including obstacle image data on both sides of the camera, can be acquired; the data for target detection is therefore more comprehensive, and performing target detection after fusing the fisheye visual image with the millimeter-wave data can improve the reliability and comprehensiveness of target detection.
Furthermore, the detection method and device adopt the DetectoRS algorithm to identify targets in the region of interest of the fusion data: a recursive feature pyramid network feeds the FPN output back into the backbone network, so that the features learned by the backbone are better suited to the detection task, and switchable atrous convolution replaces the standard convolutions of the backbone. This effectively reduces the false detection rate and missed detection rate of target detection and helps improve detection accuracy.
As a specific implementation manner of the embodiment of the present invention, a fisheye visual image is obtained, and fisheye visual image data is corrected by a correction algorithm to obtain a corrected image, which specifically includes:
and acquiring a fisheye visual image through a fisheye camera, and correcting the fisheye visual image data by adopting a Taylor formula and an equidistant projection model to obtain a corrected image.
In practice it is difficult for an actual fisheye camera to conform exactly to a single projection model, so in the embodiment of the invention the calibration of the fisheye camera is realized through a projection function. Specifically, the projection function of the fisheye camera is approximated by taking the first 5 terms of the Taylor expansion of r with respect to θ:
$r(\theta) \approx k_0\theta + k_1\theta^3 + k_2\theta^5 + k_3\theta^7 + k_4\theta^9$ (1)
referring to FIG. 2, assuming there is a point p (x, y, z) in the camera coordinate system, if the point p (x, y, z) is projected according to the pinhole camera model, there is no distortion, and the projected image point is p0(a, b). In the embodiment of the invention, if f is 1, p can be obtained0The point coordinate y and the incident angle θ are:
Figure BDA0003130820330000061
the distance r of the image point to the image center is compressed into r due to the distortiondThe actual pixel position is p "(x", y "), by setting | Op" | r ″d,|Op0And r. Combining the equidistant projection function with equation (1) is:
$r_d = f\theta_d = k_0\theta + k_1\theta^3 + k_2\theta^5 + k_3\theta^7 + k_4\theta^9$ (3)
Illustratively, taking f = 1 and setting the first-order coefficient $k_0$ of θ to 1, the fisheye camera model used in OpenCV is finally obtained:

$r_d = \theta_d = \theta\left(1 + k_1\theta^2 + k_2\theta^4 + k_3\theta^6 + k_4\theta^8\right)$ (4)
By the principle of similar triangles:

$\dfrac{x''}{a} = \dfrac{y''}{b} = \dfrac{r_d}{r}$ (5)
the coordinates of the distorted point p″ are calculated as:

$x'' = \dfrac{r_d}{r}\,a,\quad y'' = \dfrac{r_d}{r}\,b$ (6)
After the coordinates of the distorted point are obtained, the camera intrinsic parameters are used to convert the image-plane point into the pixel coordinate system, yielding the point on the final image:

$u = f_x x'' + c_x,\quad v = f_y y'' + c_y$ (7)
according to the embodiment of the invention, the points on the final image are converted into the pixel coordinate system, so that the points on the image are corrected, the corrected image of the fisheye camera is obtained, the influence of the visual blind area on the detection result can be effectively reduced, and the accuracy of target detection can be effectively improved.
As a specific implementation manner of the embodiment of the present invention, millimeter wave radar data is obtained, and effective target screening is performed according to key motion data and statistical characteristics of the millimeter wave radar data to obtain an effective target, which specifically is as follows:
and calculating the target distance, the absolute speed and the scattering sectional area intensity according to the key motion data and the statistical characteristics of the millimeter wave radar data, and screening the target as an effective target if the target distance is less than a preset distance threshold value, the absolute speed is greater than a preset speed value and the scattering sectional area intensity is within a preset intensity range.
Illustratively, the valid target screening expression is:
$\text{valid target} \iff \begin{cases} R < R_{\max} \\ |V + V_1| > V_{\min} \\ RCS_{\min} \le RCS \le RCS_{\max} \end{cases}$ (8)
where V is the target's relative velocity, $V_1$ is the vehicle speed, and RCS is the scattering cross-section intensity.
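A minimal sketch of this screening rule follows; the threshold values are illustrative placeholders, since the patent does not disclose the actual preset values:

```python
def is_valid_target(R, v_rel, v_vehicle, rcs,
                    r_max=50.0, v_min=0.5, rcs_range=(-10.0, 40.0)):
    """Keep a radar return only if (1) the target distance is below the
    preset threshold, (2) the absolute speed (relative speed plus own
    vehicle speed) exceeds the preset value, and (3) the scattering
    cross-section intensity lies within the preset range. All threshold
    values here are illustrative, not from the patent."""
    v_abs = abs(v_rel + v_vehicle)
    return (R < r_max
            and v_abs > v_min
            and rcs_range[0] <= rcs <= rcs_range[1])
```

Returns that fail any one of the three conditions (clutter far away, stationary background, implausible RCS) are dropped before fusion.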
As a specific implementation manner of the embodiment of the present invention, the millimeter-wave radar coordinates are converted into the pixel coordinate system according to a conversion relationship between the millimeter-wave radar coordinate system related to the effective target and the pixel coordinate system related to the corrected image, specifically:
the method comprises the steps of obtaining a conversion relation among a plurality of coordinate systems according to a computer vision theory and a camera model, obtaining camera internal parameters and external parameters in the fisheye camera according to the conversion relation, and realizing conversion of millimeter wave radar coordinates and a pixel coordinate system according to the internal parameters and the external parameters, wherein the coordinate systems comprise a millimeter wave radar coordinate system, a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system.
Please refer to fig. 3, which illustrates the conversion between the coordinate systems according to an embodiment of the present invention. The embodiment of the invention accurately projects a point in space detected by the millimeter-wave radar onto a point in the fisheye visual image plane acquired by the fisheye camera, which involves conversion among five coordinate systems: the millimeter-wave radar coordinate system, the world coordinate system, the camera coordinate system, the image coordinate system and the pixel coordinate system. Specifically, the conversion relations among the four coordinate systems related to the camera can be obtained from computer vision theory and the camera model, and the camera's intrinsic and extrinsic parameter values in these relations are obtained by Zhang's camera calibration method, thereby realizing the conversion from the world coordinate system to the pixel coordinate system. In a specific embodiment, the conversion between the millimeter-wave radar coordinate system and the world coordinate system requires measuring the installation position relationship between the millimeter-wave radar and the camera.
With continued reference to FIG. 3, the millimeter-wave radar coordinate system is $O_r\text{-}X_rY_rZ_r$, the world coordinate system is $O_w\text{-}X_wY_wZ_w$, and the camera coordinate system is $O_c\text{-}X_cY_cZ_c$. The YOZ planes of the three coordinate systems coincide and their XOY planes are parallel to each other; the world coordinate system coincides with the XOZ plane of the camera coordinate system and is parallel to the XOZ plane of the radar coordinate system. $Z_0$ and $Z_1$ are, respectively, the distances in the Z-axis direction between the radar and camera coordinate systems and between the camera and world coordinate systems, and H is the distance between the radar and camera coordinate systems in the Y-axis direction. Since the installation positions of the millimeter-wave radar and the camera are fixed relative to each other, the conversion relation between the radar coordinates P(R, α) and the camera coordinates P($X_c$, $Y_c$, $Z_c$) can be obtained:
$X_c = R\sin\alpha,\quad Y_c = H,\quad Z_c = R\cos\alpha + Z_0$ (9)
converting the conversion relationship into a matrix form as follows:
$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = \begin{bmatrix} \sin\alpha & 0 & 0 \\ 0 & 1 & 0 \\ \cos\alpha & 0 & 1 \end{bmatrix} \begin{bmatrix} R \\ H \\ Z_0 \end{bmatrix}$ (10)
illustratively, the camera model is an ideal linear pinhole model, and according to a conversion relationship between a camera coordinate system and an image coordinate system, image coordinates of the point P before distortion correction can be obtained:
$x = f\dfrac{X_c}{Z_c},\quad y = f\dfrac{Y_c}{Z_c}$ (11)
Considering the distortion of the camera lens, the image finally acquired by the camera deviates from the ideal image, so distortion correction must be applied to obtain the undistorted image coordinates P(x, y), from which the actual pixel position P(u, v) is obtained:
$u = \dfrac{x}{dx} + u_0,\quad v = \dfrac{y}{dy} + v_0$ (12)
After the distance R and angle α of a millimeter-wave radar detection target are obtained, Zhang's camera calibration method is used to obtain intrinsic parameters of the camera such as the focal length, principal point coordinates and distortion parameters; calculation then proceeds in the order of Equation (10), Equation (11) and Equation (12) to obtain the projected coordinates of the target in the image, thereby realizing the conversion from the radar coordinate system P(R, α) to the pixel coordinate system P(u, v).
By converting the millimeter-wave radar coordinate system into the pixel coordinate system, the embodiment of the invention unifies the millimeter-wave radar data and the fisheye visual image, improving the reliability of the fusion between the millimeter-wave radar data and the fisheye vision data.
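The radar-to-pixel chain can be sketched for an already-undistorted (ideal pinhole) image as follows. The offsets $Z_0$ along the optical axis and H along the Y axis, and their signs, are assumptions consistent with the mounting geometry described above; real values come from measuring the installation:

```python
import math

def radar_to_pixel(R, alpha, Z0, H, fx, fy, cx, cy):
    """Convert a radar measurement P(R, alpha) to pixel coordinates:
    polar radar reading -> camera-frame Cartesian point -> ideal
    pinhole projection -> pixel coordinates via the intrinsics."""
    # Radar polar measurement to camera Cartesian coordinates (assumed layout)
    Xc = R * math.sin(alpha)
    Yc = H
    Zc = R * math.cos(alpha) + Z0
    # Pinhole projection followed by pixel conversion
    u = fx * (Xc / Zc) + cx
    v = fy * (Yc / Zc) + cy
    return u, v
```

A target dead ahead (α = 0) lands on the principal column of the image, and the vertical pixel offset shrinks with distance, as expected from perspective projection.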
As a specific implementation manner of the embodiment of the present invention, the data fusion is performed on the corrected image and the effective target to obtain fusion data, which specifically includes:
when the millimeter wave radar periodically collects millimeter wave radar data, the fisheye visual image at the current moment is collected, and the effective target obtained according to the millimeter wave radar data and the corrected image obtained from the fisheye visual image are subjected to data fusion to obtain fusion data.
In a specific implementation, the embodiment of the present invention performs temporal data fusion in a downward-compatible manner, keyed to the sensor with the longer sampling period. When the millimeter-wave radar acquires a target data message at a time node, the fisheye image information at the current moment is acquired at the same time, and spatial data fusion is performed using the coordinate transformation, completing the data fusion of the millimeter-wave radar and the fisheye camera at that moment and synchronizing the two data streams in time. Further, in the embodiment of the present invention, the region of interest is set according to the radar-projected data with a preset aspect ratio: the aspect ratio of a target projected by the millimeter-wave radar onto the image is set as α = w/h, where, in a specific implementation, w is the rectangle width, h is the rectangle height, and α is set to 1.
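Generating the region of interest around a projected radar point, with α = w/h = 1 as in the text, can be sketched as below; the base size in pixels is an illustrative assumption:

```python
def roi_from_projection(u, v, base_h=120, aspect=1.0):
    """Return an (x, y, w, h) rectangle of aspect ratio `aspect` centred
    on the projected radar point (u, v). With aspect = 1 the ROI is a
    square; base_h is a placeholder pixel height, not a patent value."""
    h = base_h
    w = int(round(aspect * h))
    return (int(u) - w // 2, int(v) - h // 2, w, h)
```

Each such rectangle is the image patch passed to the DetectoRS recognizer in the fusion step.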
As a specific implementation manner of the embodiment of the present invention, a visual recognition algorithm is used to perform target recognition on the region of interest in the fusion data to obtain a target recognition result, which specifically includes:
and performing target identification on the region of interest in the fusion data by adopting a DetectoRS algorithm: based on the "looking and thinking twice" mechanism of target detection, a Recursive Feature Pyramid (RFP) is adopted at the macro level and Switchable Atrous Convolution (SAC) at the micro level, obtaining a target identification result.
Specifically, the FPN provides a top-down path to fuse the features of a plurality of scale feature maps, and the multi-scale target is predicted by utilizing semantic information extracted by a high-level network and fine-grained detail feature information extracted by a low-level network. The idea of the Recursive Feature Pyramid (RFP) is to add feedback to the FPN.
Specifically, atrous (hole) convolution inserts zeros between the taps of an ordinary convolution, equivalently expanding the kernel size of a k × k filter to k_e = k + (k − 1)(r − 1) without increasing the number of parameters or the amount of computation. The atrous convolution module has three main components: two global context modules and a SAC (Switchable Atrous Convolution) component, with one context module added before and one after the SAC component. Denoting an ordinary convolution with weight w, atrous rate r, input x and output y as y = Conv(x, w, r), the SAC component converts it to:

y = S(x) · Conv(x, w, 1) + (1 − S(x)) · Conv(x, w + Δw, r)

where r is the hyperparameter of the SAC component, Δw is a trainable weight, and the switch function S is composed of a 5 × 5 kernel average pooling layer and a 1 × 1 convolution layer; r is set to 3 by default. Please refer to fig. 4, which is a schematic diagram of the overall structure of the SAC module according to an embodiment of the present invention.
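As a toy illustration of the blended-branch idea (not the DetectoRS implementation), a minimal 1-D sketch follows: `dilated_conv1d` shows how inserting zeros enlarges the effective kernel to k + (k − 1)(r − 1), and `sac` blends the rate-1 and rate-r branches with a per-position switch `s` standing in for S(x); the learned switch, weight sharing and context modules are omitted.

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """1-D atrous (hole) convolution with 'same' padding: spacing the k taps
    of w by `rate` gives an effective kernel size k + (k - 1)(rate - 1)
    without adding any parameters."""
    k = len(w)
    pad = ((k - 1) * rate) // 2
    xp = np.pad(x, pad)
    return np.array([sum(w[i] * xp[n + i * rate] for i in range(k))
                     for n in range(len(x))])

def sac(x, w, dw, r, s):
    """Toy Switchable Atrous Convolution: a switch s in [0, 1] blends a
    rate-1 branch with a rate-r branch that reuses w plus an offset dw,
    mirroring y = S(x)*Conv(x, w, 1) + (1 - S(x))*Conv(x, w + dw, r)."""
    return (s * dilated_conv1d(x, np.asarray(w), 1)
            + (1 - s) * dilated_conv1d(x, np.asarray(w) + dw, r))
```

With the identity kernel w = [0, 1, 0] and Δw = 0, both branches reproduce the input, so the blend does too regardless of the switch value, which is a convenient correctness check.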
Referring to fig. 5, the first group is one-stage methods, the second group is multi-stage methods, the third group is HTC, and the fourth group is DetectoRS, which shows that the accuracy of the DetectoRS algorithm adopted in the embodiment of the present invention is far superior to that of other algorithms such as YOLO.
As a specific implementation manner of the embodiment of the present invention, the method for acquiring an interest region includes: after the effective target is projected into the corrected image, a rectangular area of a preset size is set in the projected image as a region of interest.
For example, the object categories detected by the embodiment of the invention are mainly two: a vehicle category a1 and a pedestrian category a2. The key features of the millimeter wave radar data comprise a second target confidence Vi and position and speed information L; a first target confidence Di and a position bounding box Bd are output for the region of interest through the DetectoRS network. The final decision is made by weighted information fusion on this basis: with visual recognition weight U1 and millimeter wave radar recognition weight U2, the probability that the target belongs to each category after information fusion is P = U1·Vi + U2·Di.
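The decision rule above is a one-line weighted sum; a small sketch makes the per-category fusion explicit. The equal default weights and the dictionary-of-confidences interface are illustrative assumptions, not values from the patent:

```python
def fuse_and_decide(conf_radar, conf_vision, U1=0.5, U2=0.5):
    """Decision-level weighted fusion: P = U1*Vi + U2*Di per category.

    conf_radar / conf_vision map category labels (e.g. 'vehicle',
    'pedestrian') to the second (radar) and first (vision) confidences.
    The weights U1, U2 default to 0.5 here purely for illustration.
    """
    P = {c: U1 * conf_radar[c] + U2 * conf_vision[c] for c in conf_radar}
    best = max(P, key=P.get)          # category with the highest fused probability
    return best, P[best]
```

For instance, radar confidences {vehicle: 0.8, pedestrian: 0.2} and vision confidences {vehicle: 0.6, pedestrian: 0.3} fuse to 0.7 for the vehicle category, which wins the decision.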
The embodiment of the invention has the following beneficial effects:
according to the embodiment of the invention, the fisheye camera is adopted to collect fisheye visual images, and data fusion is performed by combining millimeter wave radar data, so that the millimeter wave radar data and the fisheye visual data complement each other: targets overlooked by vision can be detected by the radar, and a reasonable compromise can be reached when the information from the two conflicts, thereby effectively improving the accuracy and reliability of target detection;
according to the embodiment of the invention, effective target screening is carried out on millimeter wave radar data, an area of interest is selected, space-time information fusion is carried out on the millimeter wave radar and camera data through coordinate transformation and time registration, and target detection is carried out by improving a network target recognition algorithm based on a DetectoRS algorithm, so that the advantages of the millimeter wave radar and fisheye visual recognition can be fully exerted, and the comprehensiveness of the obtained environmental information can be improved;
according to the embodiment of the invention, a DetectoRS algorithm is adopted to carry out target identification on the region of interest in the fusion data, the recursive feature pyramid network is applied, the feedback of the integrated FPN is connected to the backbone network, so that the features obtained by the backbone network training can better adapt to the detection task, and the switchable hole convolution network is adopted to replace the standard convolution on the backbone network, thereby effectively reducing the false detection rate and the missed detection rate of target detection and being beneficial to improving the accuracy of target detection.
Referring to fig. 6, a second embodiment of the present invention provides a target detection apparatus based on fisheye vision and millimeter-wave radar data, including:
the image correction module 10 is used for acquiring a fisheye visual image and correcting the fisheye visual image through a correction algorithm to obtain a corrected image;
the target screening module 20 is configured to obtain millimeter wave radar data, and perform effective target screening according to the key motion data and the statistical characteristics of the millimeter wave radar data to obtain an effective target;
the coordinate conversion module 30 is configured to project the effective target into the corrected image, and convert the millimeter wave radar coordinates into the pixel coordinate system according to a conversion relationship between the millimeter wave radar coordinate system related to the effective target and the pixel coordinate system related to the corrected image;
the target identification module 40 is used for performing data fusion on the corrected image and the effective target to obtain fusion data, and performing target identification on the region of interest in the fusion data by adopting a DetectoRS algorithm to obtain a target identification result;
the target detection module 50 is configured to perform data fusion on the target identification result and the feature data of the effective target by using a weighted information fusion decision to obtain a target detection result; the target recognition result comprises a first target confidence coefficient and a target surrounding box, and the feature data comprises a second target confidence coefficient and position and speed information.
Optionally, the target recognition result includes a first target confidence and a target bounding box, the feature data includes a second target confidence and position and velocity information, and the target detection module 50 is specifically configured to: perform spatial fusion and time registration on the target recognition result and the feature data by adopting a weighted information fusion decision to obtain a target detection result.
In the embodiment of the invention, the fisheye visual image is acquired by the fisheye camera, so that image data with larger vision can be acquired, the image data comprises obstacle image data on two sides of the camera, the data of target detection is more comprehensive, the fisheye visual image and millimeter wave data are fused for target detection, and the reliability and the comprehensiveness of the target detection can be improved.
Furthermore, the detection method and the device adopt a DetectoRS algorithm to identify the target of the region of interest in the fusion data, integrate the feedback of the FPN to be connected to the backbone network by applying a recursive feature pyramid network, enable the features obtained by the backbone network training to be better suitable for detection tasks, and adopt a switchable hole convolution network to replace the standard convolution on the backbone network, thereby effectively reducing the false detection rate and the missed detection rate of the target detection and being beneficial to improving the accuracy of the target detection.
The foregoing is a preferred embodiment of the present invention, and it should be noted that it would be apparent to those skilled in the art that various modifications and enhancements can be made without departing from the principles of the invention, and such modifications and enhancements are also considered to be within the scope of the invention.

Claims (9)

1. A target detection method based on fisheye vision and millimeter wave radar data is characterized by comprising the following steps:
acquiring a fisheye visual image, and correcting the fisheye visual image through a correction algorithm to obtain a corrected image;
acquiring millimeter wave radar data, and screening effective targets according to key motion data and statistical characteristics of the millimeter wave radar data to obtain effective targets;
projecting the effective target into the corrected image, and converting the millimeter wave radar coordinates into a pixel coordinate system according to a conversion relation between the millimeter wave radar coordinate system related to the effective target and the pixel coordinate system related to the corrected image;
performing data fusion on the corrected image and the effective target to obtain fusion data, and performing target identification on the region of interest in the fusion data by adopting a DetectoRS algorithm to obtain a target identification result;
and performing data fusion on the target identification result and the characteristic data of the effective target by adopting a weighted information fusion decision to obtain a target detection result.
2. The target detection method based on fisheye vision and millimeter wave radar data of claim 1, wherein the obtaining of the fisheye vision image and the correction of the fisheye vision image data by a correction algorithm to obtain a corrected image are specifically:
the fisheye visual image is acquired through a fisheye camera, and the fisheye visual image data is corrected by adopting a Taylor formula and an equidistant projection model to obtain a corrected image.
3. The target detection method based on fisheye vision and millimeter wave radar data of claim 1, characterized in that the millimeter wave radar data is obtained, and effective target screening is performed according to key motion data and statistical characteristics of the millimeter wave radar data to obtain an effective target, specifically:
calculating target distance, absolute speed and scattering sectional area intensity according to the key motion data and the statistical characteristics of the millimeter wave radar data, and screening the target as an effective target if the target distance is smaller than a preset distance threshold value, the absolute speed is greater than a preset speed value and the scattering sectional area intensity is within a preset intensity range.
4. The method for detecting a target based on fisheye vision and millimeter-wave radar data according to claim 1, wherein the converting the millimeter wave radar coordinates into the pixel coordinate system according to a conversion relationship between the millimeter wave radar coordinate system related to the effective target and the pixel coordinate system related to the rectified image is specifically as follows:
the method comprises the steps of obtaining a conversion relation among a plurality of coordinate systems according to a computer vision theory and a camera model, obtaining camera internal parameters and external parameters in the fisheye camera according to the conversion relation, and realizing conversion of millimeter wave radar coordinates and a pixel coordinate system according to the internal parameters and the external parameters, wherein the coordinate systems comprise a millimeter wave radar coordinate system, a world coordinate system, a camera coordinate system, an image coordinate system and a pixel coordinate system.
5. The target detection method based on fish-eye vision and millimeter wave radar data of claim 1, wherein the target identification is performed on the region of interest in the fused data by using a vision identification algorithm to obtain a target identification result, specifically:
and performing target identification on the region of interest in the fusion data by adopting a DetectoRS algorithm: based on the "looking and thinking twice" mechanism of target detection, adopting a recursive feature pyramid macroscopically and switchable atrous convolution microscopically to obtain a target identification result.
6. The target detection method based on fisheye vision and millimeter wave radar data of claim 1, characterized in that the corrected image and the effective target are subjected to data fusion to obtain fusion data, specifically:
when the millimeter wave radar collects millimeter wave radar data periodically, a fisheye visual image at the current moment is collected at the same time, and data fusion is carried out on an effective target obtained according to the millimeter wave radar data and a corrected image obtained from the fisheye visual image, so that fusion data are obtained.
7. The target detection method based on fish-eye vision and millimeter wave radar data as claimed in claim 1, wherein the method for acquiring the interest area comprises: after the effective target is projected into the corrected image, a rectangular area with a preset size is set in the projected image as a region of interest.
8. The target detection method based on fisheye vision and millimeter wave radar data of claim 1, wherein the target recognition result comprises a first target confidence degree and a target bounding box, the feature data comprises a second target confidence degree and position and velocity information, and a weighted information fusion decision is adopted to perform data fusion on the target recognition result and the feature data of the effective target to obtain a target detection result, specifically:
and performing spatial fusion and time registration on the target identification result and the characteristic data by adopting a weighted information fusion decision to obtain a target detection result.
9. A target detection device based on fisheye vision and millimeter wave radar data, comprising:
the image correction module is used for acquiring a fisheye visual image and correcting the fisheye visual image through a correction algorithm to obtain a corrected image;
the target screening module is used for acquiring millimeter wave radar data and screening effective targets according to key motion data and statistical characteristics of the millimeter wave radar data to obtain effective targets;
the coordinate conversion module is used for projecting the effective target into the corrected image and converting the millimeter wave radar coordinates into a pixel coordinate system according to the conversion relation between the millimeter wave radar coordinate system related to the effective target and the pixel coordinate system related to the corrected image;
the target identification module is used for carrying out data fusion on the corrected image and the effective target to obtain fusion data, and carrying out target identification on the region of interest in the fusion data by adopting a DetectoRS algorithm to obtain a target identification result;
and the target detection module is used for carrying out data fusion on the target identification result and the characteristic data of the effective target by adopting a weighted information fusion decision to obtain a target detection result.
CN202110705010.5A 2021-06-24 2021-06-24 Target detection method and device based on fisheye vision and millimeter wave radar data Pending CN113280829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110705010.5A CN113280829A (en) 2021-06-24 2021-06-24 Target detection method and device based on fisheye vision and millimeter wave radar data


Publications (1)

Publication Number Publication Date
CN113280829A (en) 2021-08-20


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115034257A (en) * 2022-05-09 2022-09-09 西北工业大学 Cross-modal information target identification method and device based on feature fusion
CN116148801A (en) * 2023-04-18 2023-05-23 深圳市佰誉达科技有限公司 Millimeter wave radar-based target detection method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197466A (en) * 2019-05-30 2019-09-03 河海大学常州校区 A kind of wide-angle fish eye images antidote
CN111368706A (en) * 2020-03-02 2020-07-03 南京航空航天大学 Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision
CN111976788A (en) * 2019-05-22 2020-11-24 杭州电子科技大学 High-speed rail transit network radar system
CN112115897A (en) * 2020-09-24 2020-12-22 深圳市赛为智能股份有限公司 Multi-pointer instrument alarm detection method and device, computer equipment and storage medium
CN112215306A (en) * 2020-11-18 2021-01-12 同济大学 Target detection method based on fusion of monocular vision and millimeter wave radar
CN112835037A (en) * 2020-12-29 2021-05-25 清华大学 All-weather target detection method based on fusion of vision and millimeter waves


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JUHO KANNALA et al.: "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 8, pages 1335-1338 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination