CN114724110A - Target detection method and device

Info

Publication number
CN114724110A
Authority
CN
China
Prior art keywords
target
distance
vehicle
camera
coordinate system
Prior art date
Legal status
Pending
Application number
CN202210369658.4A
Other languages
Chinese (zh)
Inventor
包鹏
王曦
王若瑜
邱芸
Current Assignee
Tianjin Tiantong Weishi Electronic Technology Co ltd
Original Assignee
Tianjin Tiantong Weishi Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Tiantong Weishi Electronic Technology Co ltd filed Critical Tianjin Tiantong Weishi Electronic Technology Co ltd
Priority to CN202210369658.4A
Publication of CN114724110A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/862 Combination of radar systems with sonar systems
    • G01S13/867 Combination of radar systems with cameras
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9315 Monitoring blind spots
    • G01S2013/9323 Alternative operation using light waves
    • G01S2013/9324 Alternative operation using ultrasonic waves
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88 Sonar systems specially adapted for specific applications
    • G01S15/93 Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S15/931 Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a target detection method and device, wherein the method comprises the following steps: a. acquiring information around the vehicle with a camera, a millimeter wave radar and an ultrasonic radar, and determining the driving environment and the target distance; b. acquiring perception information with the camera and the millimeter wave radar respectively; performing classification retrieval with the camera and the millimeter wave radar; using the camera to assist the ultrasonic radar in triangular ranging; c. determining, according to the target distance and the driving environment, the fusion mode for the information obtained by the camera, the millimeter wave radar and the ultrasonic radar, tracking and matching multiple targets, and outputting the fusion result. By fusing multiple sensors, the invention provides triple redundancy under different working principles so as to realize target-level perceived obstacle output.

Description

Target detection method and device
Technical Field
The invention relates to a target detection method and device.
Background
With the development of science and technology, automatic driving has become a field of great interest and challenge, and the automatic driving function in parking scenes in particular is an important direction of exploration. The memory parking / valet parking system is a branch of the automatic driving system and helps solve the difficulty of finding and entering parking spaces in large parking lots. During parking, sensors are usually used for data acquisition and processing to perceive surrounding information, thereby realizing functions such as parking-space searching and obstacle detection.
Obstacle perception in existing automatic driving technology is mostly accomplished with laser radar, but laser radar has a short service life and a high price, and covering 360-degree perception on one vehicle often requires several laser radar sensors, which greatly increases the cost of the whole vehicle. Of course, other sensing devices capable of implementing the above functions exist in the prior art, such as vision sensors, millimeter wave radar and ultrasonic radar.
Vision-based obstacle detection can realize accurate target classification, which facilitates cluster screening of obstacles. However, the ranging result of visual detection is not accurate; especially when ranging a long-distance target, the ranging error is very large because the pixels are sparse. In addition, target speed measurement by visual detection is based on estimating speed from differences between consecutive frames, so its accuracy and reliability are low; targets outside the deep-learning training samples cannot be recognized, and there is a large blind zone when the target is very close (for example, right next to the vehicle).
Millimeter wave radar based obstacle detection can realize very accurate ranging and speed measurement. However, the millimeter wave radar has poor target classification capability: vehicle manufacturers judge targets directionally according to the RCS (Radar Cross Section) range typical of vehicle obstacles, so pedestrian detection is poor; because of its working principle, detection of static obstacles is weaker than that of dynamic obstacles; blind zones exist at short range; and scenes such as tunnels and guardrails can cause multipath reflection and ghost targets that are not real.
The ranging accuracy of ultrasonic radar detection can reach centimeter level, but it can only be used in parking scenes; the detection range is short, the classification ability is poor, and in the TOF working mode only the radial distance of the target can be obtained, so the lateral and longitudinal position of the target cannot be obtained accurately.
Therefore, the existing sensing devices have defects when used alone, and therefore a technology capable of fusing information collected by each sensing device to realize target detection is urgently needed.
Disclosure of Invention
The invention aims to provide a target detection method and device.
In order to achieve the above object, the present invention provides a target detection method, comprising the steps of:
a. acquiring information around the vehicle by using a camera, a millimeter wave radar and an ultrasonic radar, and determining a driving environment and a target distance;
b. respectively acquiring perception information by using a camera and a millimeter wave radar; carrying out classified retrieval by utilizing a camera and a millimeter wave radar; utilizing a camera to assist an ultrasonic radar to carry out triangular ranging;
c. and determining a fusion mode of information obtained by the camera, the millimeter wave radar and the ultrasonic radar according to the target distance and the driving environment, tracking and matching a plurality of targets, and outputting a fusion result.
In accordance with one aspect of the present invention,
the driving environment is determined according to the perception information acquired by the camera and the driving speed of the vehicle, as follows: a speed threshold for vehicle driving is preset; if the camera detects that the driving environment is a highway area and the driving speed of the vehicle is greater than the speed threshold, the driving environment is determined to be a high-speed environment; if the camera detects that the driving environment is a highway area and the driving speed of the vehicle is less than or equal to the speed threshold, the driving environment is determined to be a low-speed environment; and if the camera detects that the driving environment is an off-highway area, the driving environment is determined to be a low-speed environment;
a first distance threshold and a second distance threshold of the target distance are preset; if the target distance is greater than the second distance threshold, the target is determined to be at long distance; if the target distance is greater than the first distance threshold and less than or equal to the second distance threshold, the target is determined to be at medium distance; and if the target distance is less than or equal to the first distance threshold, the target is determined to be at short distance.
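A minimal sketch of this environment and distance classification, assuming illustrative threshold values (the patent does not fix the thresholds; the 30 km/h and 5 m / 30 m values below are hypothetical):

```python
from enum import Enum

class Scene(Enum):
    HIGH_SPEED = "high_speed"
    LOW_SPEED = "low_speed"

class Range(Enum):
    SHORT = "short"
    MEDIUM = "medium"
    LONG = "long"

# Assumed example thresholds; the patent only requires that such thresholds exist.
SPEED_THRESHOLD_KMH = 30.0
FIRST_DISTANCE_THRESHOLD_M = 5.0
SECOND_DISTANCE_THRESHOLD_M = 30.0

def classify_scene(is_highway_area: bool, speed_kmh: float) -> Scene:
    # High-speed only when the camera sees a highway area AND speed exceeds the threshold.
    if is_highway_area and speed_kmh > SPEED_THRESHOLD_KMH:
        return Scene.HIGH_SPEED
    return Scene.LOW_SPEED

def classify_range(target_distance_m: float) -> Range:
    if target_distance_m > SECOND_DISTANCE_THRESHOLD_M:
        return Range.LONG
    if target_distance_m > FIRST_DISTANCE_THRESHOLD_M:
        return Range.MEDIUM
    return Range.SHORT
```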
According to one aspect of the invention, in a low-speed environment, when the target is at short distance, the fusion mode is to fuse information obtained by the look-around camera with information obtained by the ultrasonic radar; when the target is at medium distance, the fusion mode is to fuse information obtained by the look-around camera with information obtained by the angle millimeter wave radar; and when the target is at long distance, the fusion mode is to fuse information obtained by the forward-looking camera with information obtained by the front millimeter wave radar.
According to an aspect of the present invention, in a high-speed environment, when the target is at short distance, the camera, the millimeter wave radar and the ultrasonic radar operate independently; when the target is at medium distance, the fusion mode is to fuse information obtained by the look-around camera with information obtained by the angle millimeter wave radar; and when the target is at long distance, the fusion mode is to fuse information obtained by the forward-looking camera with information obtained by the front millimeter wave radar.
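Continuing the hypothetical Scene and Range enums from the previous sketch, the fusion-mode selection in the two paragraphs above can be expressed as a lookup table; the sensor names are illustrative labels, not identifiers from the patent:

```python
FUSION_MODES = {
    (Scene.LOW_SPEED,  Range.SHORT):  ("look_around_camera", "ultrasonic_radar"),
    (Scene.LOW_SPEED,  Range.MEDIUM): ("look_around_camera", "angle_mmwave_radar"),
    (Scene.LOW_SPEED,  Range.LONG):   ("front_view_camera",  "front_mmwave_radar"),
    (Scene.HIGH_SPEED, Range.SHORT):  None,  # sensors operate independently
    (Scene.HIGH_SPEED, Range.MEDIUM): ("look_around_camera", "angle_mmwave_radar"),
    (Scene.HIGH_SPEED, Range.LONG):   ("front_view_camera",  "front_mmwave_radar"),
}

def select_fusion_mode(scene: Scene, rng: Range):
    """Return the sensor pair to fuse, or None when the sensors work independently."""
    return FUSION_MODES[(scene, rng)]
```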
According to one aspect of the invention, a forward-looking camera and a front millimeter wave radar are used on the front side of the vehicle to provide the primary field of view;
obstacles at the side of the vehicle are detected through visual fusion and target tracking prediction, and ultrasonic radars at the front and rear of the vehicle are used for blind-zone compensation;
and targets at the side and rear of the vehicle are detected with the angle radar and the look-around camera, the angle radar being used to measure the target speed.
According to one aspect of the invention, in step (b), the camera is used for target classification retrieval, including image preprocessing and detection tracking;
the millimeter wave radar is used for dynamic/static classification retrieval, and low-confidence targets are filtered out;
and spatial target association is performed on the targets detected by the camera and the millimeter wave radar, associable targets are filtered, and unassociated targets are added to a container.
According to one aspect of the invention, spatial target association includes constructing a vehicle coordinate system, a map coordinate system, a vehicle-head coordinate system and a world coordinate system;
when the vehicle coordinate system is constructed, obstacles are placed around the vehicle body, and the distance from each obstacle target to the center of the rear axle of the vehicle is output;
when the map coordinate system is constructed, the current position of the vehicle is located with SLAM, the original relative position of the obstacle target in the vehicle coordinate system is rotated by an RT matrix to obtain the distances Sx and Sy, and these are mapped into the SLAM global coordinates of the map; the mapping formula is:
Sx = cos(yaw) * Ox - sin(yaw) * Oy;
Sy = -sin(yaw) * Ox + cos(yaw) * Oy;
Map x = SLAM x coordinate + Sx;
Map y = SLAM y coordinate + Sy;
wherein Map x/y represents the coordinate position of the obstacle in the SLAM global coordinates; Ox/Oy represents the position of the obstacle in the vehicle coordinate system; Sx/Sy represents the distance after RT conversion; and yaw represents the heading at the current SLAM coordinate;
when the vehicle-head coordinate system is constructed, the outer edge of the front engine hood of the vehicle is taken as the closest point and used as the measurement point;
when the world coordinate system is constructed, the point of the obstacle target closest to the center of the vehicle rear axle is measured, and its distances from the rear-axle center in the x direction and the y direction are recorded as (x_gt, y_gt).
According to an aspect of the present invention, in the step (b), the triangulation includes:
measuring the distances of each ultrasonic radar on the vehicle body in the x direction and the y direction relative to the center of a rear axle of the vehicle, recording the coordinates of two adjacent ultrasonic radars as (x1, y1) and (x2, y2), and obtaining the radial distances d1 and d2 through TOF measurement;
and the relative-distance line between the two adjacent ultrasonic radars is rotated about the X axis in the vehicle coordinate system to obtain the angle α:
α = arctan((y2 - y1) / (x2 - x1));
and the included angle β between the ultrasonic radar and the relative-distance line is further obtained as:
β = arccos((d1² + l² - d2²) / (2 · d1 · l));
wherein l is the relative distance between two adjacent ultrasonic radars, and is obtained by installation and calibration;
and multipoint positioning is carried out to obtain the distances x and y of the obstacle target relative to the rear axle of the vehicle body:
x = x1 + d1 · cos(α + β), y = y1 + d1 · sin(α + β);
and carrying out triangulation positioning on the ultrasonic radars arranged at the front and the rear of the vehicle to obtain the position information of the obstacle relative to the vehicle.
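A minimal sketch of this two-radar triangulation, under the assumption that the angles are recovered as reconstructed above (baseline orientation for α, law of cosines for β):

```python
import math

def triangulate(x1, y1, x2, y2, d1, d2):
    """Locate an obstacle from two adjacent ultrasonic radars.

    (x1, y1), (x2, y2): sensor positions relative to the rear-axle center (from calibration).
    d1, d2: radial TOF distances measured by the two sensors.
    Returns (x, y) of the obstacle relative to the rear-axle center, or None if the
    distances cannot form a triangle with the sensor baseline.
    """
    l = math.hypot(x2 - x1, y2 - y1)                 # baseline length between the two sensors
    alpha = math.atan2(y2 - y1, x2 - x1)             # baseline orientation in vehicle coordinates
    cos_beta = (d1 * d1 + l * l - d2 * d2) / (2.0 * d1 * l)  # law of cosines
    if not -1.0 <= cos_beta <= 1.0:
        return None                                   # inconsistent measurements, no triangle
    beta = math.acos(cos_beta)
    # Obstacle lies at distance d1 from sensor 1, at angle (alpha + beta) from the baseline.
    # The sign of beta (which side of the baseline) would be resolved by the sensor FOV in practice.
    x = x1 + d1 * math.cos(alpha + beta)
    y = y1 + d1 * math.sin(alpha + beta)
    return x, y
```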
According to an aspect of the present invention, in step (c), the multi-target tracking matching is based on providing the camera with distance information and target speed information in the world coordinate system, with the pixel coordinates and the bbox as references;
the camera and the millimeter wave radar are matched by minimum value in the world coordinate system, and the data of the millimeter wave radar are mapped to the pixel coordinate system and range-matched against the bbox and the pixel coordinates;
the associated target is assigned the visual classification and the width and height;
and the relative distance and speed provided by the millimeter wave radar are used as the standard obstacle output format.
According to one aspect of the invention, target classification and relative position information are obtained from the visually detected targets, and all targets detected by the millimeter wave radar are matched, taking the distance in the world coordinate system as the reference, by maximum bipartite matching with the Hungarian algorithm;
the position information acquired by the millimeter wave radar is mapped to the pixel coordinate system, the radar position being converted into pixel coordinates during the mapping;
if several radar targets fall within the bbox range, the output target is matched by the minimum extremum, with world-coordinate matching used at short distance and pixel-coordinate matching at long distance;
and the different matches of the target in the world coordinate system and the pixel coordinate system are combined for output.
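A sketch of the world-coordinate association step using the Hungarian algorithm via scipy's linear_sum_assignment; the gating distance is an assumed parameter, not specified by the patent:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_world(camera_xy: np.ndarray, radar_xy: np.ndarray, gate_m: float = 3.0):
    """Match camera targets to radar targets by Euclidean distance in the world frame.

    camera_xy: (N, 2) positions of visually detected targets.
    radar_xy:  (M, 2) positions of radar targets.
    Returns a list of (camera_index, radar_index) pairs whose distance is within the gate.
    """
    # Pairwise Euclidean distance matrix used as the assignment cost.
    cost = np.linalg.norm(camera_xy[:, None, :] - radar_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate_m]
```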
According to an aspect of the present invention, in step (c), the matching of the target in the different coordinate systems is further followed by Kalman filtering.
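A minimal constant-velocity Kalman filter sketch of the kind that could follow the matching step; the state layout and noise values are illustrative assumptions, not values from the patent:

```python
import numpy as np

class ConstantVelocityKF:
    """Tracks [x, y, vx, vy] of one fused target from position measurements."""

    def __init__(self, x0, y0, dt=0.05, q=1.0, r=0.5):
        self.x = np.array([x0, y0, 0.0, 0.0])            # state
        self.P = np.eye(4) * 10.0                        # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # only position is measured
        self.Q = np.eye(4) * q                           # process noise (assumed)
        self.R = np.eye(2) * r                           # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.x                          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```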
According to one aspect of the invention, the testing of the method includes algorithm output and error calculation;
when algorithm output is carried out, the distance measurement output of the obstacle of the target detection algorithm comprises an x-axis direction and a y-axis direction, a coordinate system of the distance measurement output is the same as a vehicle coordinate system, and the output is marked as (x)pr,ypr);
When error calculation is carried out, the average absolute error of relative distances is used for calculating the distance measurement error, the accuracy of distance measurement is measured by utilizing the Manhattan distance, and the calculation formula is as follows:
Figure BDA0003587628350000061
wherein x isdiffError in the x-axis direction, ydiffError in the y-axis direction, X is the maximum error rate of the obstacle, and XiA plurality of obstacle targets;
during measurement, the obstacle is placed at a fixed distance from the vehicle body, the vehicle is moved back and forth, and the distance between the obstacle and the vehicle body in the x direction is kept unchanged;
an obstacle is arranged on the extension line of the rear axle such that the distance in the x direction from its initial position to the vehicle rear axle is 0 m, and the distances of the target from the center of the vehicle rear axle in the x direction and the y direction are measured.
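A sketch of this ranging-error metric, computing the per-axis mean absolute (Manhattan-style) error over a set of measured obstacle targets:

```python
import numpy as np

def ranging_error(pred_xy: np.ndarray, gt_xy: np.ndarray):
    """Mean absolute ranging error per axis.

    pred_xy: (N, 2) algorithm outputs (x_pr, y_pr).
    gt_xy:   (N, 2) ground-truth measurements (x_gt, y_gt).
    Returns (x_diff, y_diff), the mean absolute errors along x and y.
    """
    abs_err = np.abs(pred_xy - gt_xy)        # per-target |x_pr - x_gt|, |y_pr - y_gt|
    x_diff, y_diff = abs_err.mean(axis=0)    # average over all obstacle targets
    return x_diff, y_diff
```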
An apparatus comprising a storage medium and a processor, the storage medium storing a computer program that, when executed by the processor, implements the above target detection method.
According to the concept of the invention, a target detection method based on multi-sensor fusion is provided for low-speed automatic driving scenes, that is, multiple sensors are fused using information from various mass-production vehicle body sensors such as a camera (vision), ultrasonic radars and millimeter wave radars. The vision sensor has a wide sensing range and a large amount of data, while the ultrasonic and millimeter wave radar sensors are less affected by weather, illumination and the like, so fusing these sensors can provide triple redundancy under different working principles, realize target-level perceived obstacle output, provide safer guarantee information for the automatic driving function in memory parking or automated valet parking scenes, fully replace laser radar for obstacle detection, and realize a low-cost mass-production automatic driving scheme for vehicles without laser radar, RTK positioning or high-precision maps.
According to one scheme of the invention, the sensors complement each other in a multi-sensor fusion mode with vision as the main sensor and radar as the auxiliary. Visual target detection has a high detection rate and good classification for fixed obstacle classes such as pedestrians and vehicles, which compensates for the poor classification and high false-detection rate of the millimeter wave radar and the ultrasonic radar. Millimeter wave radar detection has higher accuracy for the speed and position of obstacles such as pedestrians and vehicles, which compensates for the poor position and speed accuracy of visual and ultrasonic detection. Ultrasonic radar detection realizes short-distance obstacle detection and blind-zone compensation, making up for the blind zones of vision and millimeter wave radar; at the same time it can output targets outside the visual training set and supplement non-metallic targets that the millimeter wave radar misses, so that the three sensors with different detection methods fuse with one another to realize perception with full coverage around the vehicle body.
Drawings
FIG. 1 schematically illustrates a general flow diagram of a target detection method according to one embodiment of the invention;
FIG. 2 schematically illustrates a detailed fusion flow diagram of vision and radar in accordance with an embodiment of the present invention;
FIG. 3 is a schematic representation of a vehicle body sensor arrangement according to an embodiment of the present invention;
FIG. 4 schematically illustrates a parking lot detection schematic according to an embodiment of the present invention;
FIG. 5 schematically illustrates a road surface detection scheme in accordance with an embodiment of the present invention;
FIG. 6 schematically illustrates a front view and millimeter wave fused bird's eye view of an embodiment of the present invention;
FIG. 7 schematically illustrates a visual detection result of an embodiment of the invention;
FIG. 8 schematically illustrates a radar detection schematic according to an embodiment of the present invention;
FIGS. 9 and 10 are schematic diagrams respectively illustrating four coordinate systems involved in image processing according to an embodiment of the present invention;
FIG. 11 is a schematic representation of visual detection according to an embodiment of the present invention when matching the radar coordinate system;
FIG. 12 schematically shows a diagram of targets in the bbox range after matching according to an embodiment of the invention;
FIGS. 13 and 14 are schematic diagrams respectively illustrating a triangulation scene and the positions of two water-filled barriers at different locations, according to an embodiment of the invention;
FIG. 15 is a schematic representation of a traffic-cone triangulation location diagram according to one embodiment of the present invention;
FIG. 16 schematically illustrates an ultrasonic triangulation scheme according to an embodiment of the invention;
FIG. 17 is a schematic representation of a vehicle coordinate system in accordance with an embodiment of the present invention;
FIG. 18 is a schematic representation of a map coordinate system of an embodiment of the present invention;
FIG. 19 is a schematic representation of an error calculation of an embodiment of the present invention;
FIG. 20 is a diagram schematically illustrating an example of an obstacle test according to an embodiment of the present invention;
FIG. 21 is a schematic representation of an obstacle testing scenario in accordance with an embodiment of the present invention;
FIG. 22 schematically illustrates an obstacle testing grid diagram in accordance with an embodiment of the present invention;
FIG. 23 is a schematic representation of three ultrasonic ranging scenarios in accordance with an embodiment of the present invention;
FIG. 24 is a schematic representation of an ultrasonic ranging reference point selection diagram in accordance with an embodiment of the present invention;
FIG. 25 is a graph schematically showing results of ultrasonic measurements according to an embodiment of the present invention;
FIG. 26 is a schematic representation of an ultrasonic triangulation relationship diagram in accordance with an embodiment of the present invention;
FIG. 27 is a schematic illustration of an ultrasonic installation calibration chart in accordance with an embodiment of the present invention;
FIG. 28 schematically illustrates the manner of policy matching for one embodiment of the present invention;
FIG. 29 shows a schematic representation of an embodiment of the present invention before ultrasound scanning to a target;
FIG. 30 shows a schematic view of an embodiment of the present invention after ultrasonic scanning to a target;
FIG. 31 is a diagram schematically illustrating ultrasound and look-around test results for one embodiment of the present invention;
FIG. 32 is a schematic diagram illustrating detailed fusion of object detection in accordance with an embodiment of the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
The present invention is described in detail below with reference to the drawings and the specific embodiments, which are not repeated herein, but the embodiments of the present invention are not limited to the following embodiments.
Referring to fig. 1, the target detection method of the invention is implemented based on multi-sensor fusion and can be used in scenes such as memory parking. First, information around the vehicle is collected with the camera, the millimeter wave radar and the ultrasonic radar, and the driving environment and the target distance are determined; then perception information is acquired with the camera and the millimeter wave radar respectively; classification retrieval is performed with the camera and the millimeter wave radar; and the camera assists the ultrasonic radar in triangular ranging. Finally, the fusion mode for the information obtained by the camera, the millimeter wave radar and the ultrasonic radar is determined according to the target distance and the driving environment, multiple targets are tracked and matched, and the fusion result is output. As shown in fig. 2, the invention uses the camera for target classification retrieval, including image preprocessing and detection tracking, and uses the millimeter wave radar for dynamic/static classification retrieval and filtering of low-confidence targets. Spatial target association is then performed on the targets detected by the camera and the millimeter wave radar; associable targets are filtered, and unassociated targets are added to a container. Visual target classification retrieval adopts feature extraction and deep learning, and the retrieved categories can include cars, trucks, pedestrians, tricycles, riders, traffic cones and the like. Dynamic/static classification retrieval classifies targets by their speed and acceleration: dynamic targets form one set and static targets form another. The classification stage does not need to know the coordinates of the targets; only the coordinate attributes of the targets are used as the basis for spatial target association. The "container" is exposed as an array containing all attributes of the targets; its purpose is to back up all unassociated targets, so that once a previously associated target is lost for a long time or a new target appears, the container can be re-matched and re-associated.
According to the properties of the sensors, their detection ranges and the scene library, the whole fusion framework is divided into three parts: long distance, medium distance and short distance. For the current scene, the fusion module preferentially perceives information through vision (forward-looking, look-around and the like), such as traffic lights, lane lines, parking-space lines, signboards and vehicle information, to judge whether its environment is a high-speed scene or a parking-lot scene, and adaptive fusion logic is adopted for the different scenes. Then, when the target obstacle is judged to enter the long-, medium- or short-distance area, the fusion among the different sensors is split accordingly.
In the invention, the driving environment is determined according to the perception information acquired by the camera and the driving speed of the vehicle, as follows: a speed threshold, a first distance threshold and a second distance threshold are preset, and the driving environment and the target distance are then classified.
Specifically, a speed threshold for vehicle driving is preset; if the camera detects that the driving environment is a highway area and the driving speed of the vehicle is greater than the speed threshold, the driving environment is determined to be a high-speed environment; if the camera detects that the driving environment is a highway area and the driving speed is less than or equal to the speed threshold, the driving environment is determined to be a low-speed environment; and if the camera detects that the driving environment is an off-highway area, the driving environment is determined to be a low-speed environment.
For example, with a preset speed threshold of 30 km/h, if it is determined from the perception information acquired by the camera that the vehicle is driving on an expressway, then, combined with the driving speed of the vehicle, the driving environment is determined to be a high-speed environment when the driving speed is greater than 30 km/h, and a low-speed environment when the driving speed is less than or equal to 30 km/h.
Similarly, with the preset speed threshold of 30 km/h, if it is determined from the perception information acquired by the camera that the vehicle is driving in a parking lot, the driving environment is determined to be a low-speed environment; even if the vehicle itself travels faster than 30 km/h, a low-speed environment is recognized.
A first distance threshold and a second distance threshold of the target distance are preset; if the target distance is greater than the second distance threshold, the target is determined to be at long distance; if the target distance is greater than the first distance threshold and less than or equal to the second distance threshold, the target is determined to be at medium distance; and if the target distance is less than or equal to the first distance threshold, the target is determined to be at short distance.
Referring to fig. 32, in the invention, in a low-speed environment, when the target is at short distance, the fusion mode is to fuse information obtained by the look-around camera with information obtained by the ultrasonic radar; when the target is at medium distance, to fuse information obtained by the look-around camera with information obtained by the angle millimeter wave radar; and when the target is at long distance, to fuse information obtained by the forward-looking camera with information obtained by the front millimeter wave radar.
In a high-speed environment, when the target is at short distance, the camera, the millimeter wave radar and the ultrasonic radar operate independently; when the target is at medium distance, the fusion mode is to fuse information obtained by the look-around camera with information obtained by the angle millimeter wave radar; and when the target is at long distance, to fuse information obtained by the forward-looking camera with information obtained by the front millimeter wave radar.
For example, if the scene library (driving environment) judges the scene to be a high-speed urban scene, the targets within the current lane lines are output preferentially, and the target results of the forward-looking camera and the front millimeter wave radar are fused: if only one of them detects the target, complementary fusion is performed; if both detect the same target, competitive fusion is performed, with vision as the main sensor and radar as the auxiliary, using the Hungarian algorithm for multi-target matching and Kalman tracking filtering. For another example, if the scene library judges the scene to be high-speed and urbanized, the fusion mode of the look-around camera and the angle millimeter wave radar is invoked, mainly using minimum-distance matching, and the fusion tracking between the forward-looking camera and the front radar is stopped, reducing the overall amount of computation. If the scene library judges the scene to be a parking lot, the fusion module preferentially outputs the targets within the parking-space lines on both sides for fusion.
Further, if the scene library judges the scene to be high-speed and urbanized, the look-around camera and the ultrasonic radar work independently. If the scene library judges the scene to be a parking lot, the fusion module performs preferential complementary fusion; if a competitive relationship exists, the ultrasonic radar is taken as the main sensor and vision as the auxiliary. In the fusion process, the ultrasonic radar performs triangular ranging to obtain the obstacle coordinates, and the look-around camera is matched by computing the minimum Euclidean distance.
The ultrasonic radar works on the TOF principle: the farther the target, the longer the flight time, and in that case the measurement cycle of the ultrasonic wave is very long. To realize triangular ranging, at least two or three ultrasonic radars must hit the same target at the same time to obtain the obstacle coordinates; in other words, ultrasonic outlier rejection requires multi-cycle detection. Therefore, with look-around assistance, the invention performs triangular ranging between two different sensors, namely an ultrasonic radar that only transmits and receives for itself and the look-around camera, according to the installation positions of the camera and the ultrasonic radar. That is, after visual detection obtains the Cartesian coordinates of the target, the Cartesian coordinates are converted into polar coordinates,
the radial distance of the ultrasonic wave and the installation distance of the camera are calculated, the polar-coordinate position of the obstacle target from vision is converted, and a trigonometric relation is constructed to obtain high-precision positioning. Because the FOV of the look-around camera covers the FOV range of all the ultrasonic radars, this fusion mode solves the problems that traditional triangular ranging has a long cycle, cannot form triangular ranging without an overlapping area, and cannot classify targets. In turn, vision solves the problems of poor ultrasonic dynamic detection and inaccurate detection of multiple targets, irregular targets and large-volume targets, and the ultrasonic radar supplements the visual ranging accuracy.
The fusion module can simultaneously compute the visual and millimeter wave position information and also compute the relative relationship of the two after mapping to pixel coordinates, and the coupling of the two is weighted by pixel size: for example, the smaller the target appears in pixels at a longer distance, the more the matching under pixel coordinates is trusted; conversely, the larger the target appears in pixels, the more that matching is trusted. If the scene library judges the scene to be a parking lot, the sensing range of the front radar and the forward-looking camera can be reduced overall, which avoids false detection and mis-fusion in narrow scenes and reduces the amount of computation.
Referring to fig. 3, the main field of view is provided by fusing the forward-looking camera and the front millimeter wave radar on the front side of the vehicle, and it can be seen from fig. 3 that target detection at the side of the vehicle is realized by the angle radar and the look-around camera. Because the angle-radar coverage leaves blind zones at the sides and at the front and rear, depending on the radar FOV and installation angle, the invention detects obstacles at the side of the vehicle through visual fusion and target tracking prediction, and uses the ultrasonic radars at the front and rear of the vehicle for blind-zone compensation. Moreover, fusing the angle radar and the look-around camera for target detection at the side and rear of the vehicle can stably provide effective target output for the side and rear, and helps the angle radar quickly filter out strongly reflecting clutter points that are not targets in complex scenes such as underground garages. Meanwhile, the angle radar is used to measure the speed of the target vehicle, to help vision judge whether the target is dynamic or static and to determine whether surrounding obstacles interfere with the driving route. The angle radar of this embodiment is a 77 GHz short-range wide-angle millimeter wave radar.
Therefore, the whole fusion consists of a forward part, a rearward part and a lateral part, with vision as the main sensor: vision is responsible for providing the set of targets, and the radar is responsible for multi-target matching and tracking of the visual targets. In the forward direction, the millimeter wave radar and the forward-vision camera are tracked with Kalman filtering to remove jittering frames and radar ghost targets, continuous targets are tracked, and the Hungarian algorithm is used for multi-target matching. The millimeter wave radar targets are then associated with the visual targets in both the world coordinate system and the pixel coordinate system (i.e., the image coordinate system): the physical distances of radar and vision are matched in the world coordinate system, and the pixel positions are matched in the pixel coordinate system, according to the strategy. Thus, the multi-target tracking matching shown in fig. 1 means that the three different types of sensors, vision, ultrasonic and radar, are fused respectively according to their operating distance ranges; specifically, the forward-looking camera is fused with the front millimeter wave radar, the look-around camera is fused with the ultrasonic radar, and the look-around camera is fused with the angle (millimeter wave) radar.
In the invention, because visual ranging is inaccurate, the multi-target tracking matching provides the camera with distance information and target speed information in the world coordinate system, together with the pixel coordinates and the bbox (bounding box) as references. The camera and the millimeter wave radar are matched by minimum value in the world coordinate system, and the data of the millimeter wave radar are mapped to the pixel coordinate system and range-matched against the bbox and the pixel coordinates, which effectively improves the authenticity and stability of target matching; as shown in figs. 4 to 6, after radar fusion the data points all lie within the bbox range. In addition, the visual classification and the width and height can be given to the associated target, so that the fused target carries the visual classification and detection. The relative distance and speed provided by the millimeter wave radar are used as the standard obstacle output format. As shown in figs. 7 and 8, the frame in fig. 7 is the visual detection result, and the black spot at the person's foot is the sensor-fusion target position; the gray block 3 in fig. 8 is the radar output target and the remaining blocks are all radar targets (i.e., only block 3 in fig. 8 is the target detected in fig. 7, and the remaining blocks in fig. 8 are all targets detected only by the radar). Of course, the sensor-fusion target position can be selected according to the actually detected target; fig. 7 only shows that in this embodiment the black spot at the person's foot is selected as the fusion target position. Thus, target classification and relative position information are obtained from the visually detected targets, and all targets detected by the millimeter wave radar are matched, taking the distance in the world coordinate system as the reference, by maximum bipartite matching with the Hungarian algorithm. The position information acquired by the millimeter wave radar is mapped to the pixel coordinate system, the radar position being converted into pixel coordinates during mapping, as shown in figs. 9 and 10. Image processing here involves four coordinate systems: the world coordinate system describes the position of the camera, in metres; the camera coordinate system has its origin at the optical center oc of the lens; xy is the image coordinate system, whose origin is the principal point (the projection of the optical center onto the image), in millimetres; and uv is the pixel coordinate system, whose origin is the upper-left corner of the image, in pixels. P is a point in the world coordinate system, a real point in the scene; p is the imaged point of P, with coordinates (x, y) in the image coordinate system and (u, v) in the pixel coordinate system; f is the camera focal length, equal to the distance from o to oc, f = ||o - oc||. As shown in figs. 11 and 12, if the target bbox of the camera is [(0,100), (0,200), (100,100), (100,200)] and the pixel of the radar target after transformation into the pixel coordinate system is (80,150), then matching the radar coordinate system with the bbox maps the radar-detected target in fig. 11 into the bbox coordinate system in fig. 12 as the point (80,150). A radar target falling within the bbox range is matched as the same target; if several radar targets fall within the bbox range, the output target is matched by the minimum extremum, with world-coordinate matching used at short distance and pixel-coordinate matching at long distance.
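A minimal pinhole-projection sketch of the world-to-pixel mapping just described, assuming calibrated extrinsics (R, t) and intrinsics (fx, fy, cx, cy); the calibration values themselves are not given in the patent:

```python
import numpy as np

def world_to_pixel(point_w, R, t, fx, fy, cx, cy):
    """Project a 3-D point from the world (vehicle) frame into pixel coordinates.

    point_w: (3,) point in the world frame, in metres.
    R, t:    camera extrinsics (world -> camera rotation and translation) from calibration.
    fx, fy, cx, cy: camera intrinsics in pixels (focal lengths and principal point).
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    p_cam = R @ np.asarray(point_w, dtype=float) + t       # world -> camera coordinates
    if p_cam[2] <= 0:
        return None                                        # behind the image plane
    x, y = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]        # perspective division (image plane)
    u = fx * x + cx                                        # image -> pixel coordinates
    v = fy * y + cy
    return u, v
```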
Here, the near and far distances refer to the distance from the fusion system to the target. The different matches of the target in the world coordinate system and the pixel coordinate system are combined, Kalman filtered and output, so that the output target simultaneously carries the visual classification, the radar position and target speed information, and the like. The minimum-value matching is: the camera and the millimeter wave radar compute Euclidean distances from the spatial coordinates each obtains in the world coordinate system, and among multiple targets the pair with the minimum distance is taken as the matching target. The matching of the bbox and the pixel coordinate range is: the visually provided target has a bbox pixel attribute frame, including its length, width and height; the radar target coordinates are converted into pixel coordinates through coordinate transformation and the extrinsic calibration file, and whether the pixel coordinate lies within the bbox attribute frame is checked by comparison along the x and y axes; if so, the matching succeeds.
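Minimal sketches of the two matching rules just defined, minimum-value (Euclidean) matching in the world coordinate system and bbox range matching in the pixel coordinate system; the (u_min, v_min, u_max, v_max) bbox layout is an assumed convention:

```python
import math

def min_distance_match(cam_xy, radar_targets):
    """World-frame minimum-value matching: pick the radar target closest to a camera target."""
    best, best_d = None, float("inf")
    for idx, (rx, ry) in enumerate(radar_targets):
        d = math.hypot(cam_xy[0] - rx, cam_xy[1] - ry)   # Euclidean distance
        if d < best_d:
            best, best_d = idx, d
    return best, best_d

def pixel_in_bbox(u, v, bbox):
    """True if the projected radar point (u, v) falls inside the visual bounding box."""
    u_min, v_min, u_max, v_max = bbox
    return u_min <= u <= u_max and v_min <= v <= v_max
```

With the example from the description (bbox corners (0,100), (0,200), (100,100), (100,200) and radar pixel (80,150)), pixel_in_bbox(80, 150, (0, 100, 100, 200)) returns True.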
In the invention, the ultrasonic radars play a supplementary role in the whole system. In this embodiment, 12 ultrasonic radars are arranged on the vehicle, comprising 8 short-range and 4 long-range ultrasonic radars, as shown in figs. 13 to 15. By combining the triangular ranging of the ultrasonic radars with target fusion from the look-around camera (the fusion mode is the same as that of the millimeter wave radar with vision), the position information and contour information of the obstacle can be corrected, while the short-distance blind-zone problem of the other sensors is compensated. The ultrasonic radar obtains the spatial coordinates of the obstacle (i.e., the target in the vehicle coordinate system) after triangular ranging, so strategy matching can be carried out between these spatial coordinates and the look-around detection through the different FOV angles and operating ranges, followed by filtering and tracking, completing the fusion of the spatial coordinates with the target in the vehicle coordinate system. Because the accuracy of ultrasonic ranging at short distance is higher than that of vision, the ultrasonic measurement can correct the position coordinates detected by the look-around camera, providing a position-accuracy supplement for vision. Moreover, as shown in figs. 29 to 31, if the obstacle detected by the look-around camera is static, the width of the obstacle contour can be clearly obtained after the ultrasonic radar scans it, providing a position and width reference for the obstacle output of the look-around camera and complementing the bbox width measured by the look-around camera. Thus, the three sensors of the invention, which operate in different ways, form a complementary fusion.
As shown in fig. 28, the strategy matching of the invention is as follows. In the forward direction, when the target is at long distance, the forward-looking camera is fused with the front millimeter wave radar; when the target is at medium distance (i.e., it enters the forward-looking blind zone), the look-around camera is fused with the front millimeter wave radar; and when the target is at short distance, the look-around camera is fused with the ultrasonic radar. In the lateral direction, the look-around camera is fused with the angle millimeter wave radar when the target is at long distance, and with the ultrasonic radar when the target is at short distance. When the target is in the overlapping area of the long-, medium- or short-distance regions, several sensors can detect it, and fusion is performed with vision as the main sensor and the millimeter wave radar as the auxiliary; at short distance, of course, the ultrasonic radar has high priority and vision assists the output. The specific correction method is as follows: when the fusion mode with vision as the main sensor and radar as the auxiliary is adopted, vision provides the main target information such as the number, position, pixel position, bbox, length, width, height and classification of the targets, and the radar provides speed, acceleration, position, classification and the like as references; according to the different characteristics of the sensors, the radar can provide higher-precision data for speed and position, so when fusing with vision it is given a higher weight when computing the position of the target fusion result. The fusion of the look-around camera with the ultrasonic radar works in the same way as vision with radar: vision provides the main target information such as the number, position, pixel position, bbox, length, width, height and classification of the targets, while the ultrasonic radar, being sensitive to range, mainly provides the position information and the target width. Moreover, since the ultrasonic radar has high priority (i.e., it is the main information source and the other sensors provide auxiliary references), even if the look-around camera and the radar do not detect the target, as long as the ultrasonic radar detects it stably it can be output as a target result; the ultrasonic radar therefore plays the role of blind-zone compensation and look-around assistance in the whole system.
As shown in figs. 16 and 27, the position information of the obstacle relative to the vehicle body can be obtained by the multi-radar triangulation algorithm, specifically: the distances of the 12 ultrasonic radars on the vehicle body in the x and y directions relative to the center of the rear axle of the vehicle are measured, and the relative distances with respect to the axle are obtained from the mounting positions. As shown in fig. 26, taking the adjacent ultrasonic radar 1 and ultrasonic radar 2 as an example, their coordinates are (x1, y1) and (x2, y2), and the radial distances obtained by TOF measurement are d1 and d2. Thus, with the two sides known, the relative distance from ultrasonic radar 1 to ultrasonic radar 2 can be calculated from the installation positions, and this relative-distance line is rotated about the X axis in the vehicle coordinate system to obtain α:
α = arctan((y2 - y1) / (x2 - x1));
and then the included angle beta between the ultrasonic radar and the relative distance is obtained through the cosine law as follows:
β = arccos((d1² + l² - d2²) / (2 · d1 · l));
wherein l is the relative distance between two adjacent ultrasonic radars, and is obtained by installation and calibration;
the multipoint positioning can be carried out through the information, and the x and y of the obstacle target relative to the rear axle of the vehicle body are obtained as follows:
x = x1 + d1 · cos(α + β), y = y1 + d1 · sin(α + β);
through the formula, the ultrasonic radars arranged in the front and the rear of the vehicle are triangulated to obtain the position information of the obstacle relative to the vehicle.
Referring to the ultrasonic static test scenario shown in fig. 21, the class I standard obstacle is a PVC pipe 1 meter in length and 75 mm in diameter, used as the short-range detection standard (less than 1.8 m). During the detection test, class I standard obstacles were used; the obstacles were placed on a 10 cm x 10 cm grid and the test ran continuously for 20 s, and the test passed with no missed detection in this process, as shown in fig. 22. Referring to fig. 23, in the ultrasonic triangulation-accuracy scenario of this embodiment, the measurement mode takes the outermost edge, centered on the axle, as the measurement reference point. As shown in figs. 24 and 25, the vehicle coordinate system is moved down to the contact point on the outer edge of the vehicle, the green point in the figure being the (0,0) origin; an obstacle is placed behind the vehicle for accuracy ranging, and the test data are shown in table 1 below:
TABLE 1
In the invention, spatial target association includes constructing a vehicle coordinate system, a map coordinate system, a vehicle-head coordinate system and a world coordinate system. Because different sensors have different properties, their outputs lie in different coordinate systems; these four coordinate systems are therefore constructed for data interchange among the different sensors, so that the output results can be spatially associated and finally unified into one coordinate system, such as the vehicle coordinate system. When the vehicle coordinate system is constructed, obstacles are placed at various positions around the vehicle body, and the distance (in meters) between each target and the center of the rear axle of the vehicle is output in two directions; the coordinate system is shown in fig. 17, where the coordinates of the traffic cone are (2, 0), i.e., 2 meters laterally and 0 meters longitudinally from the center of the vehicle rear axle. For a target to the right of the rear-axle center the coordinate is positive, and for a target to the left it is negative; for a target in front the coordinate is positive, and for a target behind it is negative. As shown in fig. 18, the dotted lines (including the coordinate system on the vehicle body) represent the vehicle coordinate system and the solid lines the map coordinate system. When the map coordinate system is constructed, the current position of the vehicle (the SLAM coordinate) is located with SLAM, and the original relative position of the obstacle target in the vehicle coordinate system is rotated by the RT matrix to obtain the distances Sx and Sy, which are mapped into the SLAM global coordinates of the map, namely:
Sx = cos(yaw)*Ox - sin(yaw)*Oy;
Sy = -sin(yaw)*Ox + cos(yaw)*Oy;
Map x = Slam x coordinate + Sx (the target's relative distance after RT rotation from the vehicle coordinate system);
Map y = Slam y coordinate + Sy (the target's relative distance after RT rotation from the vehicle coordinate system);
wherein Map x/y represents the coordinate position of the obstacle in the SLAM global coordinates; Ox/Oy represents the position of the obstacle in the vehicle coordinate system; Sx/Sy represents the distance after RT conversion; yaw represents the heading at the current Slam coordinate;
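A minimal Python sketch of the vehicle-to-map mapping written above, assuming the yaw sign convention follows the formulas as printed; the function and variable names are placeholders introduced here and do not come from the patent:

```python
import math

def vehicle_to_map(obstacle_xy, slam_pose):
    """Map an obstacle from the vehicle coordinate system to SLAM map coordinates.

    obstacle_xy -- (Ox, Oy): obstacle position relative to the rear-axle center
    slam_pose   -- (slam_x, slam_y, yaw): current vehicle pose located by SLAM
    Returns (Map x, Map y) in the SLAM global coordinate system.
    """
    ox, oy = obstacle_xy
    slam_x, slam_y, yaw = slam_pose

    # RT rotation of the relative position, as in the formulas above.
    sx = math.cos(yaw) * ox - math.sin(yaw) * oy
    sy = -math.sin(yaw) * ox + math.cos(yaw) * oy

    # Map position = SLAM position of the vehicle + rotated relative distance.
    return slam_x + sx, slam_y + sy

# Illustrative values: obstacle at (2, 0) in the vehicle frame, vehicle at
# (10, 5) in the map with a yaw of 90 degrees.
print(vehicle_to_map((2.0, 0.0), (10.0, 5.0, math.pi / 2)))
```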
when the vehicle head coordinate system is constructed, the difference from the vehicle coordinate system is that the outer edge of the front engine cover of the vehicle is taken as the nearest point as a measuring point, so that the measuring point is lifted to the vehicle head from the rear axle of the vehicle, and the collision distance and the measurement are convenient to calculate.
When the world coordinate system is constructed, the point of the obstacle target closest to the center of the vehicle rear axle is measured, and its distances from the rear-axle center in the x direction and the y direction are recorded as (x_gt, y_gt).
Testing of the method of the present invention includes algorithm output and error calculation. For the algorithm output, the obstacle ranging output of the target detection algorithm comprises the x-axis and y-axis directions, its coordinate system is the same as the vehicle coordinate system, and the output is denoted (x_pr, y_pr). For the error calculation, the mean absolute error of the relative distance is used to compute the ranging error, i.e. the Manhattan distance is used to measure ranging accuracy, with the following formula:
||x||_1 = Σ|x_i|
(x_diff, y_diff) = (||x_pr - x_gt||_1, ||y_pr - y_gt||_1);
thus, the calculated error is also split into two directions: x_diff is the error in the x-axis direction and y_diff is the error in the y-axis direction, as shown in fig. 19, where x is the maximum error rate of the obstacle and x_i denotes the individual obstacle targets;
during measurement, the obstacle is placed at a fixed distance from the vehicle body to move the vehicle back and forth, and the distance between the obstacle and the vehicle body in the x direction is kept unchanged. Arranging obstacles behindOn the axis extension line, the distance between the initial position and the vehicle rear axis in the x direction is made 0m, and the distances of the target from the vehicle rear axis center in the x direction and the y direction are measured. For example, when the obstacle is 30m away from the vehicle body, as shown in fig. 20, the dot on the vehicle body is the center of the rear axle of the vehicle, the dot in front of the vehicle is the obstacle, the vehicle moves forward and backward, the distance in the target direction x is kept constant, and the distance in the y direction is different, the obstacle (x is measured)gt,ygt) 33.91m in the y direction, 0m in the x direction, 1.99m in the vehicle width and 3.91m from the rear axle to the head of the vehicle, as shown in the following table 2:
TABLE 2: obstacle ranging accuracy test data (presented as an image in the original publication)
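A minimal Python sketch of the error calculation described above; the predicted values below are hypothetical, while the 33.91 m ground truth reproduces the 30 m + 3.91 m measurement case above:

```python
def ranging_error(pred, gt):
    """Mean absolute (Manhattan) ranging error per axis over a set of obstacles.

    pred -- list of (x_pr, y_pr) algorithm outputs in the vehicle coordinate system
    gt   -- list of (x_gt, y_gt) measured ground-truth positions
    Returns (x_diff, y_diff): mean absolute error in the x and y directions.
    """
    n = len(pred)
    x_diff = sum(abs(xp - xg) for (xp, _), (xg, _) in zip(pred, gt)) / n
    y_diff = sum(abs(yp - yg) for (_, yp), (_, yg) in zip(pred, gt)) / n
    return x_diff, y_diff

# Hypothetical algorithm outputs against a ground truth of (0, 33.91) m, i.e. an
# obstacle 30 m from the vehicle body plus the 3.91 m rear-axle-to-head offset:
print(ranging_error([(0.12, 34.30), (-0.05, 33.60)],
                    [(0.00, 33.91), (0.00, 33.91)]))
```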
The device of the present invention includes a storage medium and a processor; the storage medium stores a computer program, and the computer program implements the above target detection method when executed by the processor.
In conclusion, the invention combines the advantages of the sensors in three different working modes for target-level fusion, thereby integrating the strengths of visual classification, radar ranging and ultrasonic short-range blind-zone compensation so that they complement one another's weaknesses, and providing a stable and reliable triple-redundant target detection mode for low-speed automatic driving.
The above description is only one embodiment of the present invention and is not intended to limit it; it is apparent to those skilled in the art that various modifications and variations can be made to the present invention. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (13)

1. A method of target detection comprising the steps of:
a. acquiring information around the vehicle by using a camera, a millimeter wave radar and an ultrasonic radar, and determining a driving environment and a target distance;
b. respectively acquiring perception information by using a camera and a millimeter wave radar; carrying out classified retrieval by utilizing a camera and a millimeter wave radar; utilizing a camera to assist an ultrasonic radar to carry out triangular ranging;
c. and determining a fusion mode of information obtained by the camera, the millimeter wave radar and the ultrasonic radar according to the target distance and the driving environment, tracking and matching a plurality of targets, and outputting a fusion result.
2. The method according to claim 1, wherein determining the driving environment according to the perception information acquired by the camera and the driving speed of the vehicle comprises:
presetting a speed threshold value for vehicle running, detecting that the running environment is a highway area by the camera, and determining that the running environment is a high-speed environment when the running speed of the vehicle is greater than the speed threshold value; the camera detects that the driving environment is a road area, the driving speed of the vehicle is less than or equal to the speed threshold value, and the driving environment is determined to be a low-speed environment; the camera detects that the driving environment is an off-highway area, and determines that the driving environment is a low-speed environment;
presetting a first distance threshold and a second distance threshold of a target distance, wherein the target distance is greater than the second distance threshold, and determining that the target is a long distance; the target distance is greater than the first distance threshold and less than or equal to a second distance threshold, and the target is determined to be a middle distance; and determining that the target is a short distance when the target distance is less than or equal to a first distance threshold.
3. The method according to claim 2, wherein when the low-speed environment and the target are close distances, the fusion mode is that information obtained by a look-around camera and information obtained by an ultrasonic radar are fused; when the low-speed environment and the target are in a medium distance, the fusion mode is that information obtained by a look-around camera and information obtained by an angle millimeter wave radar are fused; and when the low-speed environment and the target are far, the fusion mode is to adopt the fusion of the information obtained by the front-view camera and the information obtained by the front millimeter wave radar.
4. The method according to claim 2, wherein the camera, the millimeter wave radar, and the ultrasonic radar operate independently when the high-speed environment, the target being a close range; when the high-speed environment and the target are in the middle distance, the fusion mode is that information obtained by a look-around camera and information obtained by an angle millimeter wave radar are fused; and when the high-speed environment and the target are long-distance, the fusion mode is that information obtained by a forward-looking camera and information obtained by a forward millimeter wave radar are fused.
5. The method of claim 3 or 4, wherein a primary field of view is provided on the front side of the vehicle using a forward looking camera and a forward millimeter wave radar;
detecting an obstacle at the side of the vehicle through visual fusion and target tracking prediction, and performing blind-zone compensation with ultrasonic radars in the near-front and near-rear areas of the vehicle;
and detecting the target by using the angle radar and the all-round looking camera at the side and the rear of the vehicle, and detecting the target speed by using the angle radar.
6. The method according to claim 1, wherein in the step (b), a camera is used for object classification retrieval, including image preprocessing and detection tracking;
performing dynamic and static classification retrieval by using a millimeter wave radar, and filtering a low-confidence target;
and performing spatial target association on the targets detected by the camera and the millimeter wave radar, filtering the associable targets, and adding the unassociated targets to the container.
7. The method of claim 6, wherein spatial target association comprises constructing a vehicle coordinate system, a map coordinate system, a locomotive coordinate system, and a world coordinate system;
when a vehicle coordinate system is constructed, placing obstacles around a vehicle body, and outputting the distance from a target of the obstacle to the center of a rear axle of the vehicle;
when a map coordinate system is constructed, the current position of the vehicle is positioned by using slam, the original relative position of an obstacle target in the vehicle coordinate system is rotated through an RT matrix to obtain distances Sx and Sy, and the distances Sx and Sy are mapped into slam global coordinates of a map, wherein the mapping formula is as follows:
Sx=cos(yaw)*Ox-sin(yaw)*Oy;
Sy=-sin(yaw)*Ox+cos(yaw)*Oy;
map x equals Slam coordinate + Sx;
map y equals Slam coordinate + Sy;
wherein Map x/y represents the coordinate position of the obstacle under the global coordinate of SLAM; Ox/Oy represents the position of the obstacle under the vehicle coordinate system; Sx/Sy represents the distance after RT conversion; yaw represents the heading at the current slam coordinate;
when a locomotive coordinate system is constructed, taking the outer edge of a front engine hood of a vehicle as a closest point as a measuring point;
when a world coordinate system is constructed, measuring a point of an obstacle target closest to the center of a rear axle of a vehicle, and recording the distance between the point and the center of the rear axle in the x direction and the distance in the y direction, recorded respectively as (x_gt, y_gt).
8. The method of claim 1, wherein in the step (b), the triangulation comprises:
measuring the distances of each ultrasonic radar on the vehicle body in the x direction and the y direction relative to the center of a rear axle of the vehicle, recording the coordinates of two adjacent ultrasonic radars as (x1, y1) and (x2, y2), and obtaining the radial distances d1 and d2 through TOF measurement;
and alpha is obtained by the relative distance between two adjacent ultrasonic radars and rotating around the X axis under the vehicle coordinate system:
α = arctan((y2 - y1) / (x2 - x1))
and further obtaining an included angle beta between the ultrasonic radar and the relative distance as follows:
β = arccos((d1² + l² - d2²) / (2*l*d1))
wherein l is the relative distance between two adjacent ultrasonic radars, and is obtained by installation and calibration;
and carrying out multipoint positioning to obtain the distances x and y of the obstacle target relative to the rear axle of the vehicle body as follows:
x = x1 + d1*cos(α + β);
y = y1 + d1*sin(α + β)
and carrying out triangulation positioning on the ultrasonic radars arranged at the front and the rear of the vehicle to obtain the position information of the obstacle relative to the vehicle.
9. The method of claim 1, wherein in the step (c), the multi-target tracking and matching takes as reference the distance information and target speed information in the world coordinate system provided for the camera, together with the pixel coordinates and the bbox;
the camera and the millimeter wave radar are subjected to minimum value matching in a world coordinate system, and data of the millimeter wave radar are mapped to a pixel coordinate system and subjected to range matching with bbox and the pixel coordinate;
assigning a visual classification and width and height to the associated target;
the relative distance and speed provided by the millimeter wave radar are used as standard obstacle output formats.
10. The method of claim 9, wherein the target classification and the relative position information are obtained from the visually detected targets, and maximum matching is performed on all targets detected by the millimeter wave radar using bipartite graphs and the Hungarian algorithm, with the distances under the world coordinate system as reference;
mapping the position information acquired by the millimeter wave radar to a pixel coordinate system, and converting the position of the millimeter wave radar to the pixel coordinate system during mapping;
if a plurality of radar targets exist in the bbox range, matching the output targets by adopting a minimum extremum, matching by adopting a world coordinate system in a short distance, and matching by adopting a pixel coordinate system in a long distance;
and combining different matches of the target under a world coordinate system and a pixel coordinate system for output.
11. The method of claim 10, wherein in the step (c), the matching of the target under different coordinate systems is followed by Kalman filtering.
12. The method of claim 1, wherein testing of the method includes algorithm output and error calculation;
when algorithm output is carried out, the distance measurement output of the obstacle of the target detection algorithm comprises an x-axis direction and a y-axis direction, a coordinate system of the distance measurement output is the same as the vehicle coordinate system, and the output is marked as (x_pr, y_pr);
When error calculation is carried out, the average absolute error of relative distances is used for calculating the distance measurement error, the accuracy of distance measurement is measured by utilizing the Manhattan distance, and the calculation formula is as follows:
||x||_1 = Σ|x_i|
(x_diff, y_diff) = (||x_pr - x_gt||_1, ||y_pr - y_gt||_1)
wherein x_diff is the error in the x-axis direction, y_diff is the error in the y-axis direction, x is the maximum error rate of the obstacle, and x_i denotes the individual obstacle targets;
during measurement, placing the barrier at a fixed distance from the vehicle body to move the vehicle back and forth, and keeping the distance between the barrier and the vehicle body in the x direction unchanged;
an obstacle is disposed on the extension line of the rear axle such that the distance in the x direction from the initial position of the obstacle to the rear axle of the vehicle is 0m, and the distances in the x direction and the y direction from the target to the center of the rear axle of the vehicle are measured.
13. An apparatus comprising a storage medium and a processor, the storage medium storing a computer program, wherein the computer program, when executed by the processor, implements the object detection method of any one of claims 1-12.
CN202210369658.4A 2022-04-08 2022-04-08 Target detection method and device Pending CN114724110A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210369658.4A CN114724110A (en) 2022-04-08 2022-04-08 Target detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210369658.4A CN114724110A (en) 2022-04-08 2022-04-08 Target detection method and device

Publications (1)

Publication Number Publication Date
CN114724110A true CN114724110A (en) 2022-07-08

Family

ID=82242173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210369658.4A Pending CN114724110A (en) 2022-04-08 2022-04-08 Target detection method and device

Country Status (1)

Country Link
CN (1) CN114724110A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115257717A (en) * 2022-08-09 2022-11-01 上海保隆汽车科技股份有限公司 Intelligent obstacle avoidance method and system for vehicle, medium, vehicle machine and vehicle
CN115542312A (en) * 2022-11-30 2022-12-30 苏州挚途科技有限公司 Multi-sensor association method and device
CN116148801A (en) * 2023-04-18 2023-05-23 深圳市佰誉达科技有限公司 Millimeter wave radar-based target detection method and system
CN116203554A (en) * 2023-05-06 2023-06-02 武汉煜炜光学科技有限公司 Environment point cloud data scanning method and system
CN116203554B (en) * 2023-05-06 2023-07-07 武汉煜炜光学科技有限公司 Environment point cloud data scanning method and system
CN117152197A (en) * 2023-10-30 2023-12-01 成都睿芯行科技有限公司 Method and system for determining tracking object and method and system for tracking
CN117152197B (en) * 2023-10-30 2024-01-23 成都睿芯行科技有限公司 Method and system for determining tracking object and method and system for tracking
CN117631676A (en) * 2024-01-25 2024-03-01 上海伯镭智能科技有限公司 Method and device for automatically guiding unmanned vehicle in mining area to advance
CN117631676B (en) * 2024-01-25 2024-04-09 上海伯镭智能科技有限公司 Method and device for automatically guiding unmanned vehicle in mining area to advance

Similar Documents

Publication Publication Date Title
CN114724110A (en) Target detection method and device
CN109100741B (en) Target detection method based on 3D laser radar and image data
WO2022022694A1 (en) Method and system for sensing automated driving environment
CN109143207B (en) Laser radar internal reference precision verification method, device, equipment and medium
CN110531376B (en) Obstacle detection and tracking method for port unmanned vehicle
JP7090597B2 (en) Methods and systems for generating and using location reference data
JP5157067B2 (en) Automatic travel map creation device and automatic travel device.
US10705220B2 (en) System and method for ground and free-space detection
CN112180373B (en) Multi-sensor fusion intelligent parking system and method
US11620837B2 (en) Systems and methods for augmenting upright object detection
US10909395B2 (en) Object detection apparatus
CN111712731A (en) Target detection method and system and movable platform
CN113673282A (en) Target detection method and device
CN111291676A (en) Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN110794406B (en) Multi-source sensor data fusion system and method
CN105404844A (en) Road boundary detection method based on multi-line laser radar
JPH05265547A (en) On-vehicle outside monitoring device
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN112997093B (en) Method and processing unit for determining information about objects in a vehicle environment
CN110197173B (en) Road edge detection method based on binocular vision
Pantilie et al. Real-time obstacle detection using dense stereo vision and dense optical flow
CN113743171A (en) Target detection method and device
CN114821526A (en) Obstacle three-dimensional frame detection method based on 4D millimeter wave radar point cloud
CN116051818A (en) Multi-sensor information fusion method of automatic driving system
CN115468576A (en) Automatic driving positioning method and system based on multi-mode data fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination