CN111830470B - Combined calibration method and device, target object detection method, system and device - Google Patents


Publication number
CN111830470B
Authority
CN
China
Prior art keywords: image, coordinate system, information, target, coordinate
Prior art date
Legal status: Active
Application number
CN201910305730.5A
Other languages
Chinese (zh)
Other versions
CN111830470A (en)
Inventor
汤琦
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910305730.5A
Publication of CN111830470A
Application granted
Publication of CN111830470B


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to group G01S13/00
    • G01S7/40 Means for monitoring or calibrating
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865 Combination of radar systems with lidar systems
    • G01S13/867 Combination of radar systems with cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a target object detection method and device. Image targets around a vehicle are detected in an image acquired by a vehicle-mounted camera, and the coordinates of each image target in the image are acquired. For each image target, its coordinates in the millimeter wave radar coordinate system are determined according to its coordinates in the image and a transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system, and its energy is determined in an energy map obtained by coordinate conversion of the image; its coordinates in the laser radar coordinate system are determined according to its coordinates in the image and a transformation matrix for converting the image coordinate system into the laser radar coordinate system, and its height is determined in a height map obtained by coordinate conversion of the image. Target objects meeting the conditions are then screened from the image targets according to the energy and the height, so that falsely detected target objects are rejected and the detection accuracy of target objects is improved.

Description

Combined calibration method and device, target object detection method, system and device
Technical Field
The present disclosure relates to the field of target detection technologies, and in particular, to a method and apparatus for joint calibration, and a method, system, and apparatus for target object detection.
Background
At present, in the field of intelligent monitoring or intelligent driving, a conventional algorithm or a deep learning method is generally adopted to detect certain specific target objects in the video images acquired by a camera, and when such a target object is detected, an early warning prompt is issued or a driving decision is made. However, target object detection implemented by a conventional algorithm or a deep learning method alone may produce false detections, which can lead to false prompts or wrong decisions.
Disclosure of Invention
In view of this, the present application provides a joint calibration method and device, and a target object detection method, system and device, so as to solve the false detection problem of the detection manners in the related art.
According to a first aspect of embodiments of the present application, there is provided a joint calibration method, where the method is applied to a system including a millimeter wave radar, a laser radar, and a vehicle-mounted camera, where the millimeter wave radar, the laser radar, and the vehicle-mounted camera perform data acquisition in the same field of view area, and the method includes:
acquiring calibration parameters of M groups of calibration targets synchronously acquired by a millimeter wave radar, a laser radar and a vehicle-mounted camera; wherein M is a positive integer greater than or equal to 4, each set of calibration parameters comprises coordinate information of a calibration target at a preset position in a millimeter wave radar coordinate system, coordinate information in a laser radar coordinate system and coordinate information in an image coordinate system, and any two sets of calibration parameters are obtained when the calibration target is at different positions;
And determining element values of elements in a transformation matrix converted between the millimeter wave radar coordinate system, the laser radar coordinate system and the image coordinate system according to the M groups of calibration parameters, wherein the height information of coordinate information in the laser radar coordinate system in each group of calibration parameters is not used for calibrating the millimeter wave radar coordinate system, the laser radar coordinate system and the image coordinate system.
According to a second aspect of embodiments of the present application, there is provided a joint calibration device, where the device is applied to a system including a millimeter wave radar, a laser radar, and a vehicle-mounted camera, where the millimeter wave radar, the laser radar, and the vehicle-mounted camera perform data acquisition in the same field of view area, and the device includes:
the acquisition module is used for acquiring calibration parameters of M groups of calibration targets synchronously acquired by the millimeter wave radar, the laser radar and the vehicle-mounted camera; wherein M is a positive integer greater than or equal to 4, each set of calibration parameters comprises coordinate information of a calibration target at a preset position in a millimeter wave radar coordinate system, coordinate information in a laser radar coordinate system and coordinate information in an image coordinate system, and any two sets of calibration parameters are obtained when the calibration target is at different positions;
And the calibration module is used for determining element values of elements in a transformation matrix converted between the millimeter wave radar coordinate system, the laser radar coordinate system and the image coordinate system according to the M groups of calibration parameters, wherein the height information of the coordinate information in the laser radar coordinate system in each group of calibration parameters is not used for calibrating the millimeter wave radar coordinate system, the laser radar coordinate system and the image coordinate system.
By applying this embodiment, four or more groups of calibration parameters of calibration targets synchronously acquired by the millimeter wave radar, the laser radar and the vehicle-mounted camera are obtained, and the millimeter wave radar, the laser radar and the vehicle-mounted camera are calibrated while the height information in the laser radar coordinate system is ignored, so that coordinates can be converted between any two of the millimeter wave radar, the laser radar and the vehicle-mounted camera.
According to a third aspect of embodiments of the present application, there is provided a target object detection method, where the method is applied to the system described in the first aspect, and includes:
detecting all image targets around a vehicle in a video image acquired by the vehicle-mounted camera, and acquiring coordinate information of each image target in an image coordinate system of the video image;
For each image target, determining the coordinate information of the image target in the millimeter wave radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system; according to the coordinate information of the image target in the millimeter wave radar coordinate system, determining the energy information of the image target in an energy diagram obtained after coordinate conversion of the video image;
for each image target, determining coordinate information of the image target in a laser radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the laser radar coordinate system; determining the height information of the image target in a height map obtained by converting coordinates of the video image according to the coordinate information of the image target in a laser radar coordinate system;
and according to the energy information and the height information, selecting image targets meeting the conditions from all image targets of the video image as target objects.
According to a fourth aspect of embodiments of the present application, there is provided a target object detection system, the system comprising: a vehicle-mounted camera, a laser radar, a millimeter wave radar, and a processor; the millimeter wave radar, the laser radar and the vehicle-mounted camera are all arranged on the vehicle, and the field of view areas for data acquisition of the millimeter wave radar, the laser radar and the vehicle-mounted camera are the same;
The vehicle-mounted camera is used for acquiring video images of the field of view area and sending the video images to the processor;
the laser radar is used for collecting coordinate information and height information of each point in the view field area in a laser radar coordinate system and sending the coordinate information and the height information to the processor;
the millimeter wave radar is used for collecting coordinate information and energy information of each point in the field of view area in the millimeter wave radar coordinate system and sending the coordinate information and the energy information to the processor;
the processor is used for detecting all image targets around the vehicle in the video image and acquiring coordinate information of each image target in an image coordinate system of the video image; for each image target, determining the coordinate information of the image target in the millimeter wave radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system; according to the coordinate information of the image target in the millimeter wave radar coordinate system, determining the energy information of the image target in an energy diagram obtained after coordinate conversion of the video image; for each image target, determining coordinate information of the image target in a laser radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the laser radar coordinate system; determining the height information of the image target in a height map obtained by converting coordinates of the video image according to the coordinate information of the image target in a laser radar coordinate system; and according to the energy information and the height information, selecting image targets meeting the conditions from all image targets of the video image as target objects.
According to a fifth aspect of embodiments of the present application, there is provided a target object detection apparatus, which is applied to the system described in the first aspect, and includes:
the detection module is used for detecting all image targets around the vehicle in the video image acquired by the vehicle-mounted camera and acquiring coordinate information of each image target in an image coordinate system of the video image;
the first determining module is used for determining the coordinate information of each image target in the millimeter wave radar coordinate system according to the coordinate information of the image target in the image coordinate system and the transformation matrix used for converting the image coordinate system into the millimeter wave radar coordinate system; according to the coordinate information of the image target in the millimeter wave radar coordinate system, determining the energy information of the image target in an energy diagram obtained after coordinate conversion of the video image;
the second determining module is used for determining the coordinate information of each image target in the laser radar coordinate system according to the coordinate information of the image target in the image coordinate system and the transformation matrix used for converting the image coordinate system into the laser radar coordinate system; determining the height information of the image target in a height map obtained by converting coordinates of the video image according to the coordinate information of the image target in a laser radar coordinate system;
And the screening module is used for screening out image targets meeting the conditions from all image targets of the video image as target objects according to the energy information and the height information.
With this embodiment of the application, a millimeter wave radar and a laser radar whose field of view is the same as that of the camera are installed; after a plurality of image targets are detected in the video image acquired by the camera, the energy information and height information of each image target are obtained through the transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system and the transformation matrix for converting the image coordinate system into the laser radar coordinate system, respectively, and target objects meeting the conditions are then screened out based on the energy information and the height information, so that falsely detected target objects are removed and the detection accuracy of target objects is improved.
Drawings
FIG. 1 is a schematic diagram of a target frame detection of a video image according to an exemplary embodiment of the present application;
FIG. 2A is a flow chart of an embodiment of a joint calibration method according to an exemplary embodiment of the present application;
fig. 2B is a schematic diagram of a millimeter wave radar coordinate system according to the embodiment shown in fig. 2A;
FIG. 2C is a schematic diagram of a lidar coordinate system according to the embodiment shown in FIG. 2A;
FIG. 2D is a block diagram of a joint calibration system according to the embodiment of FIG. 2A;
FIG. 2E is a schematic view of the installation of a lidar, millimeter-wave radar and onboard camera according to the embodiment shown in FIG. 2A;
FIG. 3A is a flow chart of an embodiment of a method for target object detection according to an exemplary embodiment of the present application;
FIG. 3B is an energy diagram of the present application according to the embodiment shown in FIG. 3A;
FIG. 4 is a flow chart of an embodiment of another method for target object detection according to an exemplary embodiment of the present application;
FIG. 5 is a block diagram of a target object detection system according to an exemplary embodiment of the present application;
FIG. 6 is a block diagram of an embodiment of a target object detection apparatus according to an exemplary embodiment of the present application;
FIG. 7 is a block diagram of an embodiment of a joint calibration device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first message may also be referred to as a second message, and similarly, a second message may also be referred to as a first message, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In the field of intelligent security, suppose that an area is monitored using a radar and a camera and the target objects to be detected are moving objects. Fig. 1 shows the target frames detected by a conventional algorithm or a deep learning method. As can be seen from fig. 1, 5 of the target frames (004, 006, 007, 009 and 010 in fig. 1) are invalid, which shows that target object detection implemented by a conventional algorithm or a deep learning method alone suffers from false detection and has low accuracy.
To solve the above problem, the present application proposes a target object detection method. All image targets around a vehicle are detected in a video image collected by a vehicle-mounted camera, and the coordinate information of each image target in the image coordinate system of the video image is acquired. Then, for each image target, the coordinate information of the image target in the millimeter wave radar coordinate system is determined according to its coordinate information in the image coordinate system and the transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system, and the energy information of the image target is determined, from its coordinate information in the millimeter wave radar coordinate system, in an energy map obtained by coordinate conversion of the video image. Likewise, for each image target, the coordinate information of the image target in the laser radar coordinate system is determined according to its coordinate information in the image coordinate system and the transformation matrix for converting the image coordinate system into the laser radar coordinate system, and the height information of the image target is determined, from its coordinate information in the laser radar coordinate system, in a height map obtained by coordinate conversion of the video image. Finally, image targets whose energy information and height information meet the conditions are screened out of all the image targets as target objects. The millimeter wave radar, the laser radar and the vehicle-mounted camera perform data acquisition in the same field of view area.
As described above, by installing a millimeter wave radar and a laser radar with the same field of view as the camera, after a plurality of image targets are detected in the video image acquired by the camera, the energy information and height information of each image target are obtained through the transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system and the transformation matrix for converting the image coordinate system into the laser radar coordinate system, respectively; target objects meeting the conditions are then screened out based on the energy information and the height information, so that falsely detected target objects are removed and detection accuracy is improved.
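The screening step just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold values, the `energy_of`/`height_of` callbacks standing in for lookups into the energy map and height map, and all function names are our assumptions.

```python
import numpy as np

def to_radar(H, px, py):
    """Project a pixel (px, py) through the 3x3 matrix H with the
    usual homogeneous (perspective) division."""
    u, v, w = H @ np.array([px, py, 1.0])
    return u / w, v / w

def screen_targets(targets, H_mmw, H_lidar, energy_of, height_of,
                   min_energy=20.0, min_h=0.3, max_h=3.0):
    """Keep only the image targets whose radar energy and lidar height
    look like a real object. energy_of/height_of stand in for lookups
    into the energy map and height map; the thresholds are
    illustrative, not values from the patent."""
    kept = []
    for t in targets:
        e = energy_of(*to_radar(H_mmw, t["x"], t["y"]))
        h = height_of(*to_radar(H_lidar, t["x"], t["y"]))
        if e >= min_energy and min_h <= h <= max_h:
            kept.append(t)
    return kept
```

With identity matrices and toy lookup functions, a low-energy detection is rejected while a plausible one survives.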
Before the target object is detected, the conversion relations among the millimeter wave radar, the laser radar and the vehicle-mounted camera are required to be calibrated in a combined mode so as to be used for mutual conversion among the millimeter wave radar, the laser radar and the vehicle-mounted camera. The joint calibration method among the millimeter wave radar, the laser radar and the vehicle-mounted camera is described in detail below:
fig. 2A is a flowchart of an embodiment of a joint calibration method according to an exemplary embodiment of the present application, where the joint calibration method is applied to a system including a millimeter wave radar, a laser radar, and an on-vehicle camera, and as shown in fig. 2A, the joint calibration method includes the following steps:
Step 201: and acquiring calibration parameters of M groups of calibration targets synchronously acquired by the millimeter wave radar, the laser radar and the vehicle-mounted camera.
And each set of calibration parameters comprises coordinate information of a calibration target at a preset position in a millimeter wave radar coordinate system, coordinate information of a laser radar coordinate system and coordinate information of an image coordinate system, and any two sets of calibration parameters are obtained when the calibration target is at different positions.
The coordinate information of the calibration target in the millimeter wave radar coordinate system can be obtained from the radial distance and azimuth angle of the calibration target relative to the millimeter wave radar, as acquired by the millimeter wave radar; the coordinate information of the calibration target in the laser radar coordinate system can be acquired directly by the laser radar; and the coordinate information of the calibration target in the image coordinate system can be obtained from the calibration target image acquired by the vehicle-mounted camera.
In an exemplary scenario, fig. 2B shows the millimeter wave radar coordinate system, in which r and θ are the radial distance and azimuth angle of a point A collected by the millimeter wave radar; the millimeter wave radar also collects the energy information D of point A. From r and θ, the coordinate information of point A in the millimeter wave radar coordinate system is (r·sinθ, r·cosθ). Fig. 2C shows the laser radar coordinate system, in which x and y are the horizontal distance and vertical distance of a point P collected by the laser radar, and z is the height of point P relative to the ground.
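The polar-to-Cartesian conversion for a millimeter wave radar point can be written as a one-liner (the function name is illustrative, not from the patent):

```python
import math

def mmw_polar_to_xy(r, theta):
    """Convert a millimeter wave radar detection given as radial
    distance r and azimuth angle theta (in radians) into Cartesian
    coordinates (r*sin(theta), r*cos(theta)) in the radar frame,
    matching the form above."""
    return (r * math.sin(theta), r * math.cos(theta))

# A point 10 m away, 30 degrees off boresight:
x, y = mmw_polar_to_xy(10.0, math.radians(30.0))
```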
It should be noted that the millimeter wave radar is a radio detection radar: the energy information it collects for a target point is highly accurate, and it also collects the movement speed of the target point, i.e., the speed of the target point relative to the millimeter wave radar, so that moving targets can easily be distinguished from stationary ones. The laser radar is a photoelectric detection radar, and the height information it collects is highly accurate.
Step 202: and determining element values of elements in a transformation matrix converted between the millimeter wave radar coordinate system, the laser radar coordinate system and the image coordinate system according to the M groups of calibration parameters.
The height information of the coordinate information in the laser radar coordinate system in each set of calibration parameters is not used for calibrating the millimeter wave radar coordinate system, the laser radar coordinate system and the image coordinate system.
In general, the conversion of a coordinate point in the coordinate system of sensor A to a coordinate point in the coordinate system of sensor B satisfies the following equation:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=T\begin{bmatrix}x\\y\\1\end{bmatrix},\qquad T=\begin{bmatrix}t_{11}&t_{12}&t_{13}\\t_{21}&t_{22}&t_{23}\\t_{31}&t_{32}&t_{33}\end{bmatrix}\qquad(\text{Equation 1})$$

where (x, y) is the coordinate point of sensor A, (u, v) is the coordinate point of sensor B, s is a homogeneous scale factor, and T is the 3×3 transformation matrix to be solved.
Taking the transformation of (x, y) to u as an example, Equation 1 can be expanded into the following equation:

$$u=\frac{t_{11}x+t_{12}y+t_{13}}{t_{31}x+t_{32}y+t_{33}}\qquad(\text{Equation 2})$$
From the above relation, $t_{33}$ can be fixed to 1, so the 9 unknowns in the transformation matrix T reduce to 8. Since a pair of coordinate point values provides 2 equations, at least four pairs of coordinate point values are needed to solve for the 8 unknowns, i.e., M is a positive integer greater than or equal to 4. Assume n pairs of coordinate point values are provided, with n ≥ 4; the coordinate point values of sensor A are $(x_1, y_1), \dots, (x_n, y_n)$ and the coordinate point values of sensor B are $(u_1, v_1), \dots, (u_n, v_n)$. Write $T_i = [t_{i1}\; t_{i2}\; t_{i3}]'$ for $i = 1, 2, 3$, $U = [u_1\; u_2\; \cdots\; u_n]$, $V = [v_1\; v_2\; \cdots\; v_n]$, $I_{1\times n} = [1\; 1\; \cdots\; 1]$, and

$$P=\begin{bmatrix}x_1&y_1&1\\\vdots&\vdots&\vdots\\x_n&y_n&1\end{bmatrix}.$$
Then the transformation matrix is $T = [T'_1\; T'_2\; T'_3]'$, and the 8 unknowns in T can be obtained by the following formulas:

$$T_1=(P'P)^{-1}P'U'$$
$$T_2=(P'P)^{-1}P'V'$$
$$T_3=(P'P)^{-1}P'I'_{1\times n}$$

where the superscript "'" denotes the transpose and the superscript "−1" denotes the matrix inverse.
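The row-by-row least squares solution above maps directly onto NumPy. The sketch below recovers a known affine ground-truth matrix (third row [0, 0, 1], consistent with fixing $t_{33}=1$) from synthetic point pairs; the function name and the test data are our assumptions, and `lstsq` is used in place of the explicit $(P'P)^{-1}P'$ product for numerical stability:

```python
import numpy as np

def solve_transform(src_pts, dst_pts):
    """Solve the 3x3 transformation matrix T row by row from n >= 4
    point pairs, following T_i = (P'P)^{-1} P'U above."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    n = len(src)
    P = np.hstack([src, np.ones((n, 1))])   # n x 3 design matrix [x y 1]
    t1 = np.linalg.lstsq(P, dst[:, 0], rcond=None)[0]
    t2 = np.linalg.lstsq(P, dst[:, 1], rcond=None)[0]
    t3 = np.linalg.lstsq(P, np.ones(n), rcond=None)[0]
    return np.vstack([t1, t2, t3])

# Synthetic check: an affine map u = 2x + 1, v = 3y + 2.
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
dst = [(2 * x + 1, 3 * y + 2) for x, y in src]
T = solve_transform(src, dst)
```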
Based on the above analysis, the transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system can be determined as follows: obtain, from the M groups of calibration parameters, the coordinate information of the calibration target in the millimeter wave radar coordinate system and its coordinate information in the image coordinate system, and substitute them into the equation formed by the radar coordinate points of the millimeter wave radar coordinate system, the pixel coordinate points of the image coordinate system and the transformation matrix to be solved for converting the image coordinate system into the millimeter wave radar coordinate system, to obtain the element values of the elements of that transformation matrix.
This equation is (Equation 1) above, in which (x, y) is a pixel coordinate point in the image coordinate system, (u, v) is a radar coordinate point in the millimeter wave radar coordinate system, and T is the transformation matrix to be solved. Substituting the coordinate information in the millimeter wave radar coordinate system and the coordinate information in the image coordinate system into (Equation 1) yields the element values of the elements of the transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system.
The transformation matrix determination process for the transformation of the image coordinate system to the lidar coordinate system may be: coordinate information of the calibration target in a laser radar coordinate system and coordinate information in an image coordinate system are obtained from the M groups of calibration parameters, and element values of elements of a transformation matrix to be solved for transformation of the image coordinate system into the laser radar coordinate system are obtained by inputting radar coordinate point values in the laser radar coordinate system and pixel coordinate point values in the image coordinate system into an equation composed of radar coordinate points of the laser radar coordinate system, pixel coordinate points of the image coordinate system and the transformation matrix to be solved for transformation of the image coordinate system into the laser radar coordinate system.
The equation composed of the radar coordinate points of the laser radar coordinate system, the pixel coordinate points of the image coordinate system and the transformation matrix to be solved is likewise the above (equation 1), where (x, y) represents a pixel coordinate point in the image coordinate system, (u, v) represents a radar coordinate point in the laser radar coordinate system, and T represents the transformation matrix to be solved for converting the image coordinate system into the laser radar coordinate system. By substituting the acquired radar coordinate point values in the laser radar coordinate system and the pixel coordinate point values in the image coordinate system into (equation 1), the element values of the elements of the transformation matrix can be obtained.
After the transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system and the transformation matrix for converting the image coordinate system into the laser radar coordinate system are obtained, the transformation matrix between any two of the millimeter wave radar, the laser radar and the vehicle-mounted camera can be derived directly according to actual requirements, without further data acquisition or calibration.
Assume that the transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system is H1, and the transformation matrix for converting the image coordinate system into the laser radar coordinate system is H2. From H1 and H2, the following can be obtained:

the transformation matrix for converting the millimeter wave radar coordinate system into the image coordinate system is H1⁻¹;

the transformation matrix for converting the laser radar coordinate system into the image coordinate system is H2⁻¹;

the transformation matrix for converting the millimeter wave radar coordinate system into the laser radar coordinate system is H2·H1⁻¹;

the transformation matrix for converting the laser radar coordinate system into the millimeter wave radar coordinate system is H1·H2⁻¹.
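The derived matrices can be checked numerically. The sketch below uses two hypothetical invertible matrices standing in for the calibrated H1 (image to millimeter wave radar) and H2 (image to laser radar) and forms the four composed conversions by matrix inversion and multiplication:

```python
import numpy as np

# Hypothetical calibration results (any invertible 3x3 matrices serve here):
H1 = np.array([[1.0, 0.0, 2.0],    # image -> millimeter wave radar
               [0.0, 1.5, -1.0],
               [0.0, 0.0, 1.0]])
H2 = np.array([[0.8, 0.1, 0.0],    # image -> laser radar
               [0.0, 1.0, 3.0],
               [0.0, 0.0, 1.0]])

mmwave_to_image = np.linalg.inv(H1)        # H1^-1
lidar_to_image = np.linalg.inv(H2)         # H2^-1
mmwave_to_lidar = H2 @ np.linalg.inv(H1)   # H2 . H1^-1
lidar_to_mmwave = H1 @ np.linalg.inv(H2)   # H1 . H2^-1
```

Composing any derived matrix with the corresponding direct conversion reproduces the other sensor's coordinates, so no additional data acquisition is needed for these sensor pairs.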
Based on the above-described combined calibration process, as shown in fig. 2D, the combined calibration system among the laser radar, the millimeter wave radar and the vehicle-mounted camera includes an equipment information acquisition module, a mutual calibration module and a coordinate conversion module. The equipment information acquisition module is configured to receive the characteristic point data of the laser radar, the millimeter wave radar and the vehicle-mounted camera, that is, the coordinate information of a calibration target at a preset position in the laser radar coordinate system, in the millimeter wave radar coordinate system and in the image coordinate system. The mutual calibration module is configured to obtain the transformation matrix between the coordinate systems of any two of the sensors by using this coordinate information. The coordinate conversion module can then convert coordinates of any one sensor into coordinates of either of the other two sensors by using the corresponding transformation matrix; for example, after a coordinate point in the image coordinate system is input into the coordinate conversion module, the module can convert it into a coordinate point in the millimeter wave radar coordinate system or in the laser radar coordinate system by using the corresponding transformation matrix, and output the converted coordinate point.
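As an illustration of how such a coordinate conversion module might be organized (the class and method names below are hypothetical, not from the application), one matrix can be stored per ordered sensor pair, with the reverse direction obtained for free from the matrix inverse:

```python
import numpy as np

class CoordinateConversionModule:
    """Keeps one 3x3 matrix per ordered sensor pair and converts
    homogeneous 2D points between any two registered sensors."""

    def __init__(self):
        self._matrices = {}

    def register(self, src, dst, matrix):
        m = np.asarray(matrix, dtype=float)
        self._matrices[(src, dst)] = m
        # The reverse direction follows from the inverse matrix.
        self._matrices[(dst, src)] = np.linalg.inv(m)

    def convert(self, src, dst, x, y):
        u, v, w = self._matrices[(src, dst)] @ np.array([x, y, 1.0])
        return u / w, v / w
```

After the mutual calibration module registers the image-to-radar matrices, any sensor pair can be converted in both directions without re-calibration.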
In an exemplary scenario, as shown in fig. 2E, the millimeter wave radar is disposed at the head of the vehicle, the laser radar is disposed on the roof of the vehicle, and the vehicle-mounted camera is disposed on the front windshield of the vehicle. The relative positions of the three sensors on the vehicle remain unchanged throughout the calibration process, and their fields of view for data acquisition are the same. When data are acquired, the calibration target is placed at different positions in the field of view, and for each position, the coordinate information of the calibration target in the laser radar coordinate system, in the millimeter wave radar coordinate system and in the image coordinate system is acquired simultaneously. During calibration, the height component of the coordinate information in the laser radar coordinate system does not participate in the calibration calculation.
In the embodiment of the application, at least 4 groups of calibration parameters of the calibration target are synchronously acquired by the millimeter wave radar, the laser radar and the vehicle-mounted camera, and the three sensors are calibrated while the height information in the laser radar coordinate system is ignored, so that coordinate conversion between any two of the millimeter wave radar, the laser radar and the vehicle-mounted camera is realized.
The method for detecting the target object provided in the present application is described in detail below with specific embodiments.
Fig. 3A is a flowchart of an embodiment of a target object detection method according to an exemplary embodiment of the present application, and the target object detection method may be applied to the system including the millimeter wave radar, the laser radar and the vehicle-mounted camera shown in fig. 2A and described above. Since the monitoring area of the vehicle is generally fixed, the vehicle-mounted camera may be a bullet (box-type) camera whose lens direction remains unchanged.
As shown in fig. 3A, the target object detection method includes the steps of:
step 301: all image targets around the vehicle are detected from video images acquired by the vehicle-mounted camera, and coordinate information of each image target in an image coordinate system of the video images is acquired.
In an embodiment, the image target in the video image may be detected by a conventional detection algorithm, or may be detected by a deep learning method, which is not limited in this application.
Step 302: for each image target, determining the coordinate information of the image target in the millimeter wave radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for transforming the image coordinate system into the millimeter wave radar coordinate system.
Step 303: determining, according to the coordinate information of the image target in the millimeter wave radar coordinate system, the energy information of the image target in the energy map obtained after coordinate conversion of the video image.
In an embodiment, the millimeter wave radar can collect not only the energy information of a target point but also its motion speed, and the coordinate conversion of the video image into the energy map can take the characteristics of the target object to be detected into account. If the target object to be detected is a moving object, then during the conversion: if the motion speed of the millimeter wave radar coordinate point corresponding to a pixel coordinate point in the video image is not 0, the energy information of that radar coordinate point is used as the energy information of the pixel coordinate point; if the motion speed of the corresponding millimeter wave radar coordinate point is 0, the energy information of the pixel coordinate point is determined to be 0.
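The speed-gating rule above can be sketched as follows; `pixel_to_radar`, `radar_energy` and `radar_speed` are hypothetical stand-ins for the calibrated pixel-to-radar mapping and the radar's per-point measurements:

```python
import numpy as np

def build_energy_map(h, w, pixel_to_radar, radar_energy, radar_speed):
    """For each pixel, look up the mapped millimeter wave radar point and
    keep its energy only when the point is moving (speed != 0), else 0."""
    energy_map = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            key = pixel_to_radar(x, y)          # -> radar grid cell
            if radar_speed.get(key, 0.0) != 0.0:
                energy_map[y, x] = radar_energy.get(key, 0.0)
    return energy_map
```

Static clutter (speed 0) thus contributes nothing to the energy map, which is why only moving targets appear bright in fig. 3B.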
Fig. 3B shows the energy map obtained from the video image shown in fig. 1. Because the image targets 001, 003 and 008 in the video image are in motion, the corresponding areas in the energy map are brighter.
Step 304: for each image target, determining the coordinate information of the image target in the laser radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the laser radar coordinate system.
Step 305: and determining the height information of the image target in the height map obtained by converting the coordinates of the video image according to the coordinate information of the image target in the laser radar coordinate system.
Step 306: and screening out image targets meeting the conditions from all image targets of the video image as target objects according to the energy information and the height information.
In an embodiment, the image targets meeting the energy condition may be screened from all the image targets according to the energy information, and then the image targets meeting the height condition may be screened from the screened image targets according to the height information as the target objects.
In another embodiment, the image targets meeting the height condition may be screened from all the image targets according to the height information, and then the image targets meeting the energy condition may be screened from the screened image targets according to the energy information as the target objects.
For example, the energy condition may be that the energy information of the image target is greater than the energy information around the image target, and the height condition may be that the height information of the image target falls within a preset height interval, where the interval is set for the kind of target object according to practical experience.
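Under these example conditions, the two-stage screening of steps 301 to 306 might look like the following sketch (the field names are illustrative, not from the application):

```python
def screen_targets(targets, min_height, max_height):
    """targets: list of dicts with 'energy', 'surround_energy', 'height'.
    Energy condition: the target's energy exceeds its surroundings.
    Height condition: the height lies inside the preset interval."""
    by_energy = [t for t in targets if t['energy'] > t['surround_energy']]
    return [t for t in by_energy
            if min_height <= t['height'] <= max_height]
```

Swapping the two list comprehensions gives the alternative order (height first, then energy) described in the other embodiment; the surviving set is the same.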
In the embodiment of the application, a millimeter wave radar and a laser radar whose fields of view are the same as that of the camera are provided. After a plurality of image targets are detected in the video image acquired by the camera, the energy information and the height information of each image target are obtained through the transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system and the transformation matrix for converting the image coordinate system into the laser radar coordinate system. Target objects meeting the conditions are then screened out based on the energy information and the height information, so that falsely detected targets are removed and the detection accuracy of target objects is improved.
Fig. 4 is a flowchart of an embodiment of another target object detection method according to an exemplary embodiment of the present application, and the method may be applied to the system including the millimeter wave radar, the laser radar and the vehicle-mounted camera shown in fig. 2A and described above. As shown in fig. 4, the target object detection method includes the following steps:
Step 401: and acquiring a video image acquired by the vehicle-mounted camera.
Step 402: and converting the video image by using a transformation matrix converted from the image coordinate system to the millimeter wave radar coordinate system to obtain an energy diagram.
For each pixel coordinate point in the video image, a coordinate point of the pixel coordinate point in the millimeter wave radar coordinate system is calculated through a transformation matrix converted from the image coordinate system to the millimeter wave radar coordinate system, and then energy information corresponding to the coordinate point is used as a pixel value of the pixel coordinate point in the energy graph.
Step 403: and converting the video image by using a transformation matrix converted from the image coordinate system to the laser radar coordinate system to obtain a height map.
For each pixel coordinate point in the video image, a coordinate point of the pixel coordinate point corresponding to the laser radar coordinate system is calculated through a transformation matrix converted from the image coordinate system to the laser radar coordinate system, and then altitude information corresponding to the coordinate point is used as a pixel value of the pixel coordinate point in the altitude map.
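The per-pixel conversion of steps 402 and 403 can be sketched for the height map (the energy map is built the same way with energy values); `lidar_height` is a hypothetical lookup from a laser radar grid cell to its measured height, and the transformation matrix is assumed to act on homogeneous pixel coordinates:

```python
import numpy as np

def build_height_map(h, w, T_img_to_lidar, lidar_height):
    """Project every pixel into the laser radar coordinate system with the
    calibrated matrix and take the height recorded at that point."""
    height_map = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            u, v, s = T_img_to_lidar @ np.array([x, y, 1.0])
            cell = (int(round(u / s)), int(round(v / s)))  # nearest grid cell
            height_map[y, x] = lidar_height.get(cell, 0.0)
    return height_map
```

In practice the projected coordinates would be quantized to the radar's own grid resolution; the nearest-cell rounding here is only the simplest choice.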
Step 404: the video image, the energy map, and the height map are input to a trained target object detection model to detect a target object in the video image using the video image, the energy map, and the height map by the target object detection model.
Before step 404 is performed, the target object detection model may be trained by acquiring multiple sets of training samples, where each training sample includes a video image acquired by the vehicle-mounted camera, the energy map obtained by coordinate conversion of that video image, and the height map obtained by coordinate conversion of that video image, and then training a network model for detecting the target object with these samples.

The network model may be trained by a deep learning method. The input samples during training are not independent video images alone but also include the energy map and the height map, so that the detection precision of the network model is improved under the combined action of multi-dimensional information.
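One simple way to present the three inputs jointly to a network, as the paragraph above suggests, is to stack the energy map and the height map as extra channels of the image tensor. This is a sketch of one possible input layout, not the layout used by the application:

```python
import numpy as np

def stack_model_input(video_rgb, energy_map, height_map):
    """Concatenate an (H, W, 3) RGB frame with the (H, W) energy and
    height maps into a single (H, W, 5) input tensor."""
    assert video_rgb.shape[:2] == energy_map.shape == height_map.shape
    return np.dstack([video_rgb.astype(float),
                      energy_map[..., None],
                      height_map[..., None]])
```

A detection network would then simply declare 5 input channels instead of 3, letting the convolutional layers fuse appearance, radar energy and lidar height at every spatial location.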
In the embodiment of the application, the target object detection is realized by inputting the video image, the energy map and the height map obtained after the video image is subjected to coordinate conversion into the target object detection model, and the detection accuracy of the target object in the video image can be improved and the false detection rate can be reduced under the action of the height map and the energy map.
FIG. 5 is a block diagram of a target object detection system according to an exemplary embodiment of the present application, the system comprising: an in-vehicle camera 501, a laser radar 502, a millimeter wave radar 503, and a processor 504; the millimeter wave radar 503, the laser radar 502 and the vehicle-mounted camera 501 are all arranged on the vehicle, and the field of view areas for data acquisition of the three are the same;
The vehicle-mounted camera 501 is configured to collect a video image of the field of view area and send the video image to the processor;
the laser radar 502 is configured to collect coordinate information and height information of each point in the field of view in a laser radar coordinate system, and send the coordinate information and the height information to the processor;
the millimeter wave radar 503 is configured to collect coordinate information and energy information of each point in the field of view in the millimeter wave radar coordinate system, and send the coordinate information and the energy information to the processor;
the processor 504 is configured to detect all image objects around the vehicle in the video image, and acquire coordinate information of each image object in an image coordinate system of the video image; for each image target, determining the coordinate information of the image target in the millimeter wave radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system; according to the coordinate information of the image target in the millimeter wave radar coordinate system, determining the energy information of the image target in an energy diagram obtained after coordinate conversion of the video image; for each image target, determining coordinate information of the image target in a laser radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the laser radar coordinate system; determining the height information of the image target in a height map obtained by converting coordinates of the video image according to the coordinate information of the image target in a laser radar coordinate system; and according to the energy information and the height information, selecting image targets meeting the conditions from all image targets of the video image as target objects.
In an alternative implementation, in the process of screening image targets meeting the conditions from all image targets of the video image according to the energy information and the height information, the processor 504 is specifically configured to: screen the image targets meeting the energy condition from all the image targets according to the energy information, and then screen the image targets meeting the height condition from the screened image targets as the target objects according to the height information; or screen the image targets meeting the height condition from all the image targets according to the height information, and then screen the image targets meeting the energy condition from the screened image targets as the target objects according to the energy information.
Fig. 6 is a block diagram of an embodiment of a target object detection apparatus according to an exemplary embodiment of the present application, where the target object detection apparatus may be applied to the processor of the system shown in fig. 5, and as shown in fig. 6, the target object detection apparatus includes:
the detection module 610 is configured to detect all image targets around a vehicle in a video image acquired by the vehicle-mounted camera, and acquire coordinate information of each image target in an image coordinate system of the video image;
A first determining module 620, configured to determine, for each image target, coordinate information of the image target in the millimeter wave radar coordinate system according to coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system; according to the coordinate information of the image target in the millimeter wave radar coordinate system, determining the energy information of the image target in an energy diagram obtained after coordinate conversion of the video image;
a second determining module 630, configured to determine, for each image target, coordinate information of the image target in the laser radar coordinate system according to coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the laser radar coordinate system; determining the height information of the image target in a height map obtained by converting coordinates of the video image according to the coordinate information of the image target in a laser radar coordinate system;
and a screening module 640, configured to screen, according to the energy information and the altitude information, an image object that meets a condition from all image objects of the video image as a target object.
In an optional implementation manner, the screening module 640 is specifically configured to screen, according to the energy information, image targets that meet the energy condition from all the image targets, and screen, according to the height information, image targets that meet the height condition from the screened image targets as the target objects; or screen, according to the height information, image targets that meet the height condition from all the image targets, and then screen, according to the energy information, image targets that meet the energy condition from the screened image targets as the target objects.
Fig. 7 is a block diagram of an embodiment of a joint calibration device according to an exemplary embodiment of the present application, where the joint calibration device may be applied to the processor of the system shown in fig. 5, and as shown in fig. 7, the joint calibration device includes:
the acquisition module 710 is configured to acquire calibration parameters of M groups of calibration targets synchronously acquired by the millimeter wave radar, the laser radar and the vehicle-mounted camera; wherein M is a positive integer greater than or equal to 4, each set of calibration parameters comprises coordinate information of a calibration target at a preset position in a millimeter wave radar coordinate system, coordinate information in a laser radar coordinate system and coordinate information in an image coordinate system, and any two sets of calibration parameters are obtained when the calibration target is at different positions;
The calibration module 720 is configured to determine, according to the M sets of calibration parameters, element values of elements in a transformation matrix converted between the millimeter wave radar coordinate system, the laser radar coordinate system, and the image coordinate system, where height information of coordinate information in the laser radar coordinate system in each set of calibration parameters is not used for calibrating between the millimeter wave radar coordinate system, the laser radar coordinate system, and the image coordinate system.
In an optional implementation manner, the calibration module 720 is specifically configured to obtain, from the M sets of calibration parameters, coordinate information of the calibration target in a millimeter wave radar coordinate system and coordinate information of the calibration target in an image coordinate system, respectively; the coordinate information in the millimeter wave radar coordinate system and the coordinate information in the image coordinate system are input into an equation composed of radar coordinate points of the millimeter wave radar coordinate system, pixel coordinate points of the image coordinate system and a transformation matrix to be solved for converting the image coordinate system into the millimeter wave radar coordinate system, so that element values of elements of the transformation matrix to be solved for converting the image coordinate system into the millimeter wave radar coordinate system are obtained through solving.
In an optional implementation manner, the calibration module 720 is further specifically configured to obtain coordinate information of the calibration target in a laser radar coordinate system and coordinate information of the calibration target in an image coordinate system from the M sets of calibration parameters; the radar coordinate point values in the laser radar coordinate system and the pixel coordinate point values in the image coordinate system are input into an equation composed of radar coordinate points of the laser radar coordinate system, pixel coordinate points of the image coordinate system and a transformation matrix to be solved for converting the image coordinate system into the laser radar coordinate system, so that element values of elements of the transformation matrix to be solved for converting the image coordinate system into the laser radar coordinate system are obtained.
The implementation process of the functions and roles of each unit in the above apparatus is described in detail in the implementation process of the corresponding steps in the above method, and will not be repeated here.
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference is made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solution of the present application. Those of ordinary skill in the art can understand and implement the solution without creative effort.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention to the precise form disclosed, and any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A target object detection method, wherein the method is applied to a system comprising millimeter wave radar, laser radar and vehicle-mounted camera, and comprises the following steps:
detecting all image targets around a vehicle in a video image acquired by the vehicle-mounted camera, and acquiring coordinate information of each image target in an image coordinate system of the video image;
for each image target, determining the coordinate information of the image target in the millimeter wave radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system; according to the coordinate information of the image target in the millimeter wave radar coordinate system, determining the energy information of the image target in an energy diagram obtained after coordinate conversion of the video image;
for each image target, determining coordinate information of the image target in a laser radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the laser radar coordinate system; determining the height information of the image target in a height map obtained by converting coordinates of the video image according to the coordinate information of the image target in a laser radar coordinate system;
And according to the energy information and the height information, selecting image targets meeting the conditions from all image targets of the video image as target objects.
2. The method according to claim 1, wherein selecting, as a target object, an image object satisfying a condition from among all image objects of the video image based on the energy information and the height information, comprises:
screening image targets meeting the energy condition from all the image targets according to the energy information, and then screening image targets meeting the height condition from the screened image targets according to the height information as the target objects; or

screening image targets meeting the height condition from all the image targets according to the height information, and then screening image targets meeting the energy condition from the screened image targets according to the energy information as the target objects.
3. The method of claim 1, wherein the millimeter wave radar, the laser radar and the vehicle-mounted camera have the same field of view for data acquisition, the method further comprising:
acquiring calibration parameters of M groups of calibration targets synchronously acquired by a millimeter wave radar, a laser radar and a vehicle-mounted camera; wherein M is a positive integer greater than or equal to 4, each set of calibration parameters comprises coordinate information of a calibration target at a preset position in a millimeter wave radar coordinate system, coordinate information in a laser radar coordinate system and coordinate information in an image coordinate system, and any two sets of calibration parameters are obtained when the calibration target is at different positions;
And determining element values of elements in a transformation matrix converted between the millimeter wave radar coordinate system, the laser radar coordinate system and the image coordinate system according to the M groups of calibration parameters, wherein the height information of coordinate information in the laser radar coordinate system in each group of calibration parameters is not used for calibrating the millimeter wave radar coordinate system, the laser radar coordinate system and the image coordinate system.
4. A method according to claim 3, wherein determining the element values of the elements in the transformation matrix transformed between the millimeter wave radar coordinate system, the lidar coordinate system and the image coordinate system according to the M sets of calibration parameters comprises:
acquiring coordinate information of the calibration target in a millimeter wave radar coordinate system and coordinate information of the calibration target in an image coordinate system from the M groups of calibration parameters;
the coordinate information in the millimeter wave radar coordinate system and the coordinate information in the image coordinate system are input into an equation composed of radar coordinate points of the millimeter wave radar coordinate system, pixel coordinate points of the image coordinate system and a transformation matrix to be solved for converting the image coordinate system into the millimeter wave radar coordinate system, so that element values of elements of the transformation matrix to be solved for converting the image coordinate system into the millimeter wave radar coordinate system are obtained through solving.
5. A method according to claim 3, wherein determining the element values of the elements in the transformation matrix transformed between the millimeter wave radar coordinate system, the lidar coordinate system and the image coordinate system according to the M sets of calibration parameters comprises:
acquiring coordinate information of the calibration target in a laser radar coordinate system and coordinate information of the calibration target in an image coordinate system from the M groups of calibration parameters;
the radar coordinate point values in the laser radar coordinate system and the pixel coordinate point values in the image coordinate system are input into an equation composed of radar coordinate points of the laser radar coordinate system, pixel coordinate points of the image coordinate system and a transformation matrix to be solved for converting the image coordinate system into the laser radar coordinate system, so that element values of elements of the transformation matrix to be solved for converting the image coordinate system into the laser radar coordinate system are obtained.
6. A target object detection system, the system comprising: a vehicle-mounted camera, a laser radar, a millimeter wave radar, and a processor; wherein the millimeter wave radar, the laser radar and the vehicle-mounted camera are all mounted on a vehicle, and the millimeter wave radar, the laser radar and the vehicle-mounted camera have the same field of view area for data acquisition;
The vehicle-mounted camera is used for acquiring video images of the field of view area and sending the video images to the processor;
the laser radar is used for collecting coordinate information and height information of each point in the field of view area in a laser radar coordinate system and sending the coordinate information and the height information to the processor;
the millimeter wave radar is used for collecting coordinate information and energy information of each point in the field of view area in a millimeter wave radar coordinate system and sending the coordinate information and the energy information to the processor;
the processor is used for detecting all image targets around the vehicle in the video image and acquiring coordinate information of each image target in an image coordinate system of the video image; for each image target, determining the coordinate information of the image target in the millimeter wave radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the millimeter wave radar coordinate system; according to the coordinate information of the image target in the millimeter wave radar coordinate system, determining the energy information of the image target in an energy diagram obtained after coordinate conversion of the video image; for each image target, determining coordinate information of the image target in a laser radar coordinate system according to the coordinate information of the image target in the image coordinate system and a transformation matrix for converting the image coordinate system into the laser radar coordinate system; determining the height information of the image target in a height map obtained by converting coordinates of the video image according to the coordinate information of the image target in a laser radar coordinate system; and according to the energy information and the height information, selecting image targets meeting the conditions from all image targets of the video image as target objects.
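The processor's per-target flow described above (coordinate conversion followed by a lookup in the converted energy map and height map) can be sketched as follows. The sketch assumes the transformation matrices are 3x3 homographies and that both maps are 2-D grids with a known origin and cell size; the grid geometry and all names are illustrative assumptions, not details from the patent:

```python
import numpy as np

def to_radar_plane(T, uv):
    """Map an image point (u, v) through a 3x3 transformation matrix T
    (image -> radar homography, assumed known from calibration)."""
    p = T @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

def grid_value(grid, origin, cell, xy):
    """Read the value stored at radar coordinate xy from a 2-D grid
    (energy map or height map); origin and cell size are illustrative."""
    col = int((xy[0] - origin[0]) / cell)
    row = int((xy[1] - origin[1]) / cell)
    return float(grid[row, col])

def annotate_target(center_uv, T_mmw, T_lidar, energy_map, height_map,
                    origin=(0.0, 0.0), cell=1.0):
    """Attach radar energy and lidar height to one detected image target."""
    e = grid_value(energy_map, origin, cell, to_radar_plane(T_mmw, center_uv))
    h = grid_value(height_map, origin, cell, to_radar_plane(T_lidar, center_uv))
    return {"energy": e, "height": h}
```

Each detected image target is annotated this way, after which the screening step decides which annotated targets qualify as target objects.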
7. The system according to claim 6, wherein the processor is specifically configured to, in the process of screening out, from all image targets of the video image, image targets satisfying the conditions as target objects according to the energy information and the height information: screen out, from all the image targets, image targets satisfying the energy condition according to the energy information, and re-screen, from the screened image targets, image targets satisfying the height condition according to the height information as target objects; or screen out, from all the image targets, image targets satisfying the height condition according to the height information, and re-screen, from the screened image targets, image targets satisfying the energy condition according to the energy information as target objects.
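The two screening orders in claim 7 can be sketched as a pair of threshold filters; since both conditions must hold, either order produces the same final set, differing only in intermediate work. The field names and thresholds below are illustrative assumptions:

```python
def screen_targets(targets, energy_min, height_min, height_max):
    """Keep image targets whose radar energy meets the energy condition,
    then re-screen those whose lidar height meets the height condition.
    (Reversing the two stages gives the same result.)"""
    by_energy = [t for t in targets if t["energy"] >= energy_min]
    return [t for t in by_energy if height_min <= t["height"] <= height_max]
```

Energy filtering first tends to be preferable when the millimeter wave radar discards most clutter, since the second stage then runs over fewer candidates.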
8. A target object detection apparatus, characterized in that the apparatus is applied to a system comprising a millimeter wave radar, a laser radar and a vehicle-mounted camera, and the apparatus comprises:
the detection module is used for detecting all image targets around the vehicle in the video image acquired by the vehicle-mounted camera and acquiring coordinate information of each image target in an image coordinate system of the video image;
the first determining module is used for determining the coordinate information of each image target in the millimeter wave radar coordinate system according to the coordinate information of the image target in the image coordinate system and the transformation matrix used for converting the image coordinate system into the millimeter wave radar coordinate system; according to the coordinate information of the image target in the millimeter wave radar coordinate system, determining the energy information of the image target in an energy diagram obtained after coordinate conversion of the video image;
the second determining module is used for determining, for each image target, the coordinate information of the image target in the laser radar coordinate system according to the coordinate information of the image target in the image coordinate system and the transformation matrix used for converting the image coordinate system into the laser radar coordinate system; determining the height information of the image target in a height map obtained by converting coordinates of the video image according to the coordinate information of the image target in the laser radar coordinate system;
and the screening module is used for screening out image targets meeting the conditions from all image targets of the video image as target objects according to the energy information and the height information.
CN201910305730.5A 2019-04-16 2019-04-16 Combined calibration method and device, target object detection method, system and device Active CN111830470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910305730.5A CN111830470B (en) 2019-04-16 2019-04-16 Combined calibration method and device, target object detection method, system and device

Publications (2)

Publication Number Publication Date
CN111830470A 2020-10-27
CN111830470B 2023-06-27

Family

ID=72915722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910305730.5A Active CN111830470B (en) 2019-04-16 2019-04-16 Combined calibration method and device, target object detection method, system and device

Country Status (1)

Country Link
CN (1) CN111830470B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112162252B (en) * 2020-09-25 2023-07-18 南昌航空大学 Data calibration method for millimeter wave radar and visible light sensor
CN113110091A (en) * 2021-05-10 2021-07-13 深圳绿米联创科技有限公司 Smart home control method, display method, system, device and electronic equipment
CN114092916B (en) * 2021-11-26 2023-07-18 阿波罗智联(北京)科技有限公司 Image processing method, device, electronic equipment, automatic driving vehicle and medium
CN114779188B (en) * 2022-01-24 2023-11-03 南京慧尔视智能科技有限公司 Method, device, equipment and medium for evaluating calibration effect

Citations (3)

Publication number Priority date Publication date Assignee Title
JP2000329852A (en) * 1999-05-17 2000-11-30 Nissan Motor Co Ltd Obstacle recognition device
JP2008186343A (en) * 2007-01-31 2008-08-14 Toyota Motor Corp Object detection device
CN102508246A (en) * 2011-10-13 2012-06-20 吉林大学 Method for detecting and tracking obstacles in front of vehicle

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
JP2008172441A (en) * 2007-01-10 2008-07-24 Omron Corp Detection device, method, and program
CN101975951B (en) * 2010-06-09 2013-03-20 北京理工大学 Field environment barrier detection method fusing distance and image information
CN105758426B (en) * 2016-02-19 2019-07-26 深圳市杉川机器人有限公司 The combined calibrating method of the multisensor of mobile robot
CN106405555B (en) * 2016-09-23 2019-01-01 百度在线网络技术(北京)有限公司 Obstacle detection method and device for Vehicular radar system
CN108020825B (en) * 2016-11-03 2021-02-19 岭纬公司 Fusion calibration system and method for laser radar, laser camera and video camera
CN109215083B (en) * 2017-07-06 2021-08-31 华为技术有限公司 Method and device for calibrating external parameters of vehicle-mounted sensor
CN108226906B (en) * 2017-11-29 2019-11-26 深圳市易成自动驾驶技术有限公司 A kind of scaling method, device and computer readable storage medium
CN108509972A (en) * 2018-01-16 2018-09-07 天津大学 A kind of barrier feature extracting method based on millimeter wave and laser radar
CN109143241A (en) * 2018-07-26 2019-01-04 清华大学苏州汽车研究院(吴江) The fusion method and system of radar data and image data
CN109472831A (en) * 2018-11-19 2019-03-15 东南大学 Obstacle recognition range-measurement system and method towards road roller work progress
CN109598765B (en) * 2018-12-21 2023-01-03 浙江大学 Monocular camera and millimeter wave radar external parameter combined calibration method based on spherical calibration object
CN109615870A (en) * 2018-12-29 2019-04-12 南京慧尔视智能科技有限公司 A kind of traffic detection system based on millimetre-wave radar and video


Similar Documents

Publication Publication Date Title
CN111830470B (en) Combined calibration method and device, target object detection method, system and device
CN110363158B (en) Millimeter wave radar and visual cooperative target detection and identification method based on neural network
CN113359097B (en) Millimeter wave radar and camera combined calibration method
CN111045000A (en) Monitoring system and method
CN112116031B (en) Target fusion method, system, vehicle and storage medium based on road side equipment
CN104134354A (en) Traffic monitoring system for speed measurement and assignment of moving vehicles in a multi-target recording module
CN107144839A (en) Pass through the long object of sensor fusion detection
KR101735557B1 (en) System and Method for Collecting Traffic Information Using Real time Object Detection
CN108732573B (en) Parking space detection method and parking space detection system
CN111288910A (en) Tramcar trapezoidal turnout deformation monitoring system and method
CN109598947A (en) A kind of vehicle identification method and system
CN112348882A (en) Low-altitude target tracking information fusion method and system based on multi-source detector
CN116448773B (en) Pavement disease detection method and system with image-vibration characteristics fused
CN110796360A (en) Fixed traffic detection source multi-scale data fusion method
CN115588040A (en) System and method for counting and positioning coordinates based on full-view imaging points
CN112488022B (en) Method, device and system for monitoring panoramic view
CN111538008A (en) Transformation matrix determining method, system and device
CN115909285A (en) Radar and video signal fused vehicle tracking method
Rachman et al. Camera Self-Calibration: Deep Learning from Driving Scenes
CN110969875B (en) Method and system for road intersection traffic management
CN113673105A (en) Design method of true value comparison strategy
CN114169355A (en) Information acquisition method and device, millimeter wave radar, equipment and storage medium
CN111427038A (en) Target identification method based on vision and 77GHz radar in garage environment
CN220019916U (en) Human body detection system
WO2021106297A1 (en) Provision device, vehicle management device, vehicle management system, vehicle management method, and vehicle management program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant