CN114782548A - Global image-based radar vision data calibration method, device, equipment and medium - Google Patents

Global image-based radar vision data calibration method, device, equipment and medium

Info

Publication number
CN114782548A
Authority
CN
China
Prior art keywords
calibration
radar
point
image
global image
Prior art date
Legal status
Granted
Application number
CN202210420934.5A
Other languages
Chinese (zh)
Other versions
CN114782548B (en)
Inventor
黄金叶
陈磊
陈予涵
陈予琦
吴维
Current Assignee
Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Original Assignee
Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Qiyang Special Equipment Technology Engineering Co ltd
Priority to CN202210420934.5A
Publication of CN114782548A
Application granted
Publication of CN114782548B
Status: Active
Anticipated expiration

Classifications

    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06F 17/11 — Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F 17/12 — Simultaneous equations, e.g. systems of linear equations
    • G06F 17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 2207/10032 — Satellite or aerial image; Remote sensing
    • G06T 2207/10044 — Radar image
    • G06T 2207/20221 — Image fusion; Image merging
    • G06T 2207/30204 — Marker
    • G06T 2207/30248 — Vehicle exterior or interior


Abstract

The invention discloses a global image-based radar vision data calibration method, device, electronic equipment and storage medium. The coordinate system corresponding to the global image of a data fusion area is used as the reference coordinate system, so that coordinate calibration can be performed on multiple cameras and multiple radars simultaneously. This overcomes the limitation of traditional calibration methods, which can only calibrate the data of a single camera and a single radar, greatly simplifies the labeling process, improves labeling efficiency, and realizes data fusion of traffic targets in different directions at a road intersection.

Description

Global image-based radar vision data calibration method, device, equipment and medium
Technical Field
The invention belongs to the technical field of road traffic, and particularly relates to a global image-based radar vision data calibration method, device, equipment and medium.
Background
With the development of intelligent transportation technology and the introduction and popularization of the vehicle-road cooperation concept, roadside perception technology has developed rapidly. The most commonly used roadside perception units are video detectors and millimeter-wave radars. Detection by a single sensor can no longer satisfy the ever-higher demands of traffic monitoring, so at the present stage multi-sensor fusion is widely adopted for roadside sensing, improving perception accuracy and providing richer perception data, thereby accelerating the deployment of vehicle-road cooperation and improving traffic safety.
Radar information and image information must be computed in the same spatial and temporal dimensions. Before data fusion, therefore, the coordinate system of a radar-detected target must be converted into the image coordinate system of the camera, and the sampling frequencies of the millimeter-wave radar and the camera must be unified. Most traditional radar-camera data calibration methods take the pixel coordinate system of the camera or the data coordinate system of the radar as the reference coordinate system, and map the coordinate system of the other device into that reference coordinate system by multi-point labeling, thereby obtaining the spatial mapping relationship between radar and camera. This approach has the following defects:
The reference coordinate system adopted by the traditional calibration method is not a global coordinate system, so it can only calibrate the data of one radar against one camera and cannot map the coordinates of multiple cameras and multiple radars simultaneously. In practical applications, however, an intersection is covered by multiple sensors (several cameras and several radars) working together, so the traditional calibration method cannot calibrate the data of multiple sensors under their respective coordinate systems, and data fusion of traffic targets in different directions at the intersection cannot be carried out effectively. A data calibration method capable of calibrating multiple cameras and multiple radars simultaneously is therefore urgently needed.
Disclosure of Invention
The invention aims to provide a global image-based radar vision data calibration method, device, equipment and medium, so as to solve the problem that existing calibration methods cannot map the coordinates of multiple cameras and multiple radars simultaneously, which prevents effective data fusion of traffic targets in different directions at a road intersection.
To achieve this purpose, the invention adopts the following technical solutions:
in a first aspect, the present invention provides a method for calibrating radar vision data based on a global image, including:
acquiring a calibration image of a road intersection and a global image of a data fusion area in the road intersection, wherein the data fusion area is a common detection area of a camera and a radar at the road intersection, the global image is an overhead image of the data fusion area, and the calibration image is a road intersection image shot by the camera;
performing multi-point calibration on the calibration image to obtain at least four first calibration points, and determining, in the global image and based on the at least four first calibration points, a second calibration point representing the same position as each first calibration point, wherein each first calibration point is a pixel in the calibration image representing a marker in the road intersection;
acquiring coordinates of each first calibration point based on the calibration image, and acquiring coordinates of each second calibration point based on the global image;
obtaining a spatial mapping matrix between the calibration image and the global image based on the coordinates of each first calibration point and the coordinates of each second calibration point;
acquiring the installation position of a radar at a road intersection, and obtaining a space mapping parameter between a scanning plane corresponding to the radar and the global image based on the installation position and the global image;
and calibrating the detection data of the radar and the detection data of the camera based on the spatial mapping matrix and the spatial mapping parameters to obtain global calibration data.
Based on the above disclosure, the coordinate system corresponding to the global image of the data fusion area is taken as the reference coordinate system, where the data fusion area is the common detection area of the cameras and radars at the road intersection and the global image is an overhead image of that area; the region covered by the global image can therefore represent the intersecting detection areas of all cameras and radars at the road intersection. At least four first calibration points are then selected from the calibration image, and the global image is labeled with their counterparts, i.e. second calibration points whose represented positions are the same as those of the first calibration points. This maps the first calibration points of the calibration image into the global image, and the spatial mapping relationship between the calibration image and the global image can then be obtained from the coordinate relationship between the first and second calibration points. Similarly, for the plane where the radar is located, the spatial mapping parameters between the radar scanning plane and the global image are obtained from the radar's installation position and the global image. The detection data of a camera can thus be mapped into the global image with the spatial mapping matrix, and the detection data of a radar with the spatial mapping parameters, realizing simultaneous mapping of multiple camera and multiple radar coordinates.
Through this design, the coordinate system corresponding to the global image of the data fusion area serves as the reference coordinate system, and multiple cameras and multiple radars can be coordinate-calibrated simultaneously. This solves the problem that traditional calibration methods can only calibrate a single camera against a single radar, greatly simplifies the labeling process, improves labeling efficiency, and realizes data fusion of traffic targets in different directions at the road intersection.
In one possible design, obtaining a spatial mapping matrix between the calibration image and the global image based on the coordinates of each first calibration point and the coordinates of each second calibration point includes:
constructing a 3 × 3 homography matrix, wherein each element in the homography matrix is a parameter to be solved;
based on the coordinates of each first calibration point, the coordinates of each second calibration point and the homography matrix, a coordinate conversion equation set between a coordinate system corresponding to the calibration image and a coordinate system corresponding to the global image is constructed;
and solving the coordinate conversion equation set by using a singular value decomposition method to obtain the value of each parameter to be solved, so as to update the homography matrix by using the value of each parameter to be solved, and using the updated homography matrix as the space mapping matrix.
Based on the above disclosure, the solution process of the spatial mapping matrix is as follows. Mapping data from the calibration image into the global image is equivalent to mapping coordinates from the camera's coordinate system into the world coordinate system, and multiplying a coordinate in the camera's coordinate system by a projection matrix yields the world coordinate; the spatial mapping matrix to be solved is therefore this projection matrix. Once the coordinates of calibration points representing the same positions of the road intersection in both the calibration image and the global image are known, a homography matrix (i.e. a projection matrix) is first constructed, a coordinate conversion equation set is then established from the first calibration coordinates, the second calibration coordinates and the homography matrix, and finally solving the equation set yields the values of the elements of the homography matrix, from which the spatial mapping matrix is obtained.
In one possible design, the system of coordinate transformation equations is:
$$u_m = \frac{h_1 x_i + h_2 y_i + h_3}{h_7 x_i + h_8 y_i + h_9}, \qquad v_m = \frac{h_4 x_i + h_5 y_i + h_6}{h_7 x_i + h_8 y_i + h_9} \tag{1}$$

In the above formula (1), $h_1, h_2, \ldots, h_9$ are the parameters to be solved; $x_i$ and $y_i$ denote the abscissa and ordinate of the $i$-th first calibration point, and $u_m$ and $v_m$ denote the abscissa and ordinate of the $m$-th second calibration point, where $i = 1, 2, \ldots, N$ and $m = 1, 2, \ldots, M$; $N$ is the total number of first calibration points, $M$ is the total number of second calibration points, $N \geq 4$, and $M \geq 4$.
Based on the above disclosure, the coordinates of the first calibration points and the second calibration points are substituted into the above formula (1) to obtain a set of equations; the terms of these equations are then assembled into a matrix to obtain a matrix equation; finally, the matrix equation is solved by singular value decomposition to obtain the value of each parameter to be solved in the homography matrix.
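For reference, the same least-squares estimate can also be produced with an off-the-shelf routine. The sketch below assumes Python with OpenCV and NumPy and uses illustrative point pairs; it is not the patent's own solver, which constructs and solves the equation system explicitly as detailed in the embodiment.

```python
# A sketch of estimating the spatial mapping matrix from >= 4 point
# pairs with OpenCV; the library call is a stand-in, not the patent's
# own SVD solver, and the coordinates are illustrative only.
import numpy as np
import cv2

# (x_i, y_i): first calibration points in the camera's calibration image
pts_calib = np.array([[120, 340], [480, 355], [150, 600], [500, 610]],
                     dtype=np.float32)
# (u_m, v_m): matching second calibration points in the global image
pts_global = np.array([[210, 90], [390, 95], [205, 300], [400, 310]],
                      dtype=np.float32)

# Method 0 selects the plain least-squares estimate (no RANSAC), which
# for exactly four pairs reduces to the direct linear transform.
H, _ = cv2.findHomography(pts_calib, pts_global, 0)
print(H)  # 3x3 spatial mapping matrix, normalized so h9 = 1
```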
In one possible design, obtaining a spatial mapping parameter between a scan plane corresponding to the radar and the global image based on the installation location and the global image includes:
performing radar position marking in the global image based on the installation position to obtain a radar position point;
determining a radar calibration lane in the global image based on the radar position point, wherein the radar calibration lane is located in a lane opposite to the lane where the radar position point is located and is parallel to the normal of an antenna array surface of the radar;
acquiring monitoring data of any vehicle by the radar, wherein the monitoring data comprises an azimuth angle of any vehicle relative to the radar;
based on the monitoring data and the radar position points, vehicle position marking is carried out in the global image to obtain the position points of any vehicle;
obtaining calibration parameters, wherein the calibration parameters are used for adjusting position points of any vehicle in a global image;
and carrying out position calibration on the position point of any vehicle by utilizing the radar calibration lane and the calibration parameters so as to obtain the space mapping parameters when the position point of any vehicle is adjusted into the radar calibration lane.
Based on the above disclosure, the specific process of obtaining the spatial mapping parameters is as follows. The radar is first marked in the global image according to its installation position, and a calibration lane for calibrating the radar position is determined in the global image. The radar's monitoring data, i.e. detected vehicle data, is then introduced; the radar can directly read the azimuth angle and the coordinates of a vehicle in its own scanning plane, so the vehicle can be labeled in the global image from these data. Finally, the vehicle's position is calibrated against the radar calibration lane using the calibration parameters: when the vehicle's position point has been adjusted into the radar calibration lane, the spatial mapping relationship between the radar's scanning plane and the global image, i.e. the spatial mapping parameters, is obtained.
In one possible design, determining a radar calibration lane in the global image based on the radar position point includes:
selecting four third calibration points in a lane opposite to the lane where the radar position point is located in the global image, wherein the four third calibration points are divided into two groups, each group of the third calibration points are connected to form a straight line parallel to the opposite lane, and the straight lines corresponding to the two groups of the third calibration points are parallel to each other;
and based on the straight lines corresponding to the two groups of third calibration points, taking the area between the straight lines corresponding to the two groups of third calibration points as the radar calibration lane.
In one possible design, the obtaining calibration parameters includes:
establishing a radar coordinate system by taking the radar position point as the origin, wherein the due-east direction of the origin is the positive x-axis direction and the due-north direction of the origin is the positive y-axis direction;
determining an initial radar azimuth angle, an initial calibration abscissa and an initial calibration ordinate based on the radar coordinate system so as to form the calibration parameters by using the initial radar azimuth angle, the initial calibration abscissa and the initial calibration ordinate;
correspondingly, based on the radar calibration lane and the calibration parameters, performing position calibration on the position point of any vehicle to obtain the spatial mapping parameters when the position point of any vehicle is adjusted into the radar calibration lane, including:
adjusting the calibration parameters, and judging whether the position point of any vehicle in the global image moves into the radar calibration lane or not after the calibration parameters are adjusted each time;
if yes, taking the calibration parameters that move the position point of any vehicle into the radar calibration lane as the spatial mapping parameters; otherwise, continuing to adjust the calibration parameters until the position point of any vehicle moves into the radar calibration lane.
Based on the above disclosure, the initial values of the calibration parameters, namely the initial radar azimuth angle, the initial calibration abscissa and the initial calibration ordinate, are determined by establishing a radar coordinate system. The position point of the vehicle in the global image is then changed by adjusting the values of these three parameters, and the adjustment continues until the vehicle's position point moves into the radar calibration lane; the calibration parameters as adjusted at that moment are used as the spatial mapping parameters.
In one possible design, calibrating the detection data of the radar and the detection data of the camera based on the spatial mapping matrix and the spatial mapping parameters to obtain global calibration data includes:
mapping the detection data of the camera to a global image coordinate system based on the spatial mapping matrix to obtain calibration data of the camera, wherein the global image coordinate system is a coordinate system corresponding to the global image; and
mapping the detection data of the radar to the global image coordinate system based on the space mapping parameters to obtain calibration data of the radar;
and forming the global calibration data by using the calibration data of the camera and the calibration data of the radar.
In a second aspect, the present invention provides a radar vision data calibration apparatus based on a global image, including:
the image acquisition unit is used for acquiring a calibration image of a road intersection and a global image of the data fusion area in the road intersection, wherein the data fusion area is the common detection area of the camera and the radar at the road intersection, the global image is an overhead image of the data fusion area, and the calibration image is an image of the road intersection captured by the camera;
the labeling unit is used for performing multi-point labeling on the calibration image to obtain at least four first calibration points, and determining, in the global image and based on the at least four first calibration points, a second calibration point representing the same position as each first calibration point, wherein each first calibration point is a pixel in the calibration image representing a marker at the road intersection;
a coordinate obtaining unit, configured to obtain coordinates of each first calibration point based on the calibration image, and obtain coordinates of each second calibration point based on the global image;
a coordinate mapping unit, configured to obtain a spatial mapping matrix between the calibration image and the global image based on the coordinate of each first calibration point and the coordinate of each second calibration point;
the coordinate mapping unit is further used for acquiring the installation position of the radar at the road intersection, and obtaining space mapping parameters between the scanning plane corresponding to the radar and the global image based on the installation position and the global image;
and the data calibration unit is used for calibrating the detection data of the radar and the detection data of the camera based on the spatial mapping matrix and the spatial mapping parameters to obtain global calibration data.
In a third aspect, the present invention provides another global image-based radar vision data calibration device. Taking the device as an electronic device as an example, it comprises a memory, a processor and a transceiver connected in sequence in a communication manner, wherein the memory is used to store a computer program, the transceiver is used to send and receive messages, and the processor is used to read the computer program and execute the global image-based radar vision data calibration method according to the first aspect or any possible design of the first aspect.
In a fourth aspect, the present invention provides a storage medium having stored thereon instructions for executing the global image-based radar vision data calibration method as described in the first aspect or any one of the possible designs of the first aspect when the instructions are executed on a computer.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the global image-based radar vision data calibration method described in the first aspect or any one of the possible designs of the first aspect.
Drawings
FIG. 1 is a schematic flow chart illustrating steps of a radar vision data calibration method based on a global image according to the present invention;
FIG. 2 is a calibration graph of a calibration image and a global image provided by the present invention;
FIG. 3 is a diagram of a radar calibration provided by the present invention;
fig. 4 is a schematic structural diagram of a radar vision data calibration device based on a global image according to the present invention;
fig. 5 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or", as it may appear herein, merely describes an association relationship between associated objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, B exists alone, or A and B exist simultaneously. The term "/and", as it may appear herein, describes another association relationship, meaning that two relationships may exist; e.g., A/and B may mean: A exists alone, or A and B exist simultaneously. In addition, the character "/", as it may appear herein, generally means that the associated objects before and after it are in an "or" relationship.
Examples
Referring to fig. 1, the global image-based radar vision data calibration method provided by the first aspect of this embodiment can calibrate the coordinates of multiple cameras and multiple radars at the same time, greatly simplifying the calibration process and improving calibration efficiency. The calibration method may be, but is not limited to, run on the calibration terminal side, where the calibration terminal may be, but is not limited to, a personal computer (PC), a tablet computer, a smartphone and/or a personal digital assistant (PDA). It should be understood that the foregoing execution subject does not constitute a limitation of the embodiments of the present application. The steps of the method are shown in steps S1 to S6.
S1, obtaining a calibration image of a road intersection and a global image of the data fusion area in the road intersection, wherein the data fusion area is the common detection area of the camera and the radar at the road intersection, the global image is an overhead image of the data fusion area, and the calibration image is an image of the road intersection captured by the camera. In specific application, the cameras and radars are mounted on the electronic police poles at the road intersection, so a camera can photograph the intersection to obtain the calibration image, and the global image may be acquired, but is not limited to being acquired, by unmanned aerial vehicle (UAV) aerial photography.
Optionally, referring to fig. 2, the global image and the calibration image are both images from which traffic targets have been removed, i.e. pure background images of the road intersection, where traffic targets include pedestrians and/or vehicles; this prevents pedestrians and vehicles in the images from interfering with the calibration between the coordinate system of the camera image and the coordinate system of the global image.
In this embodiment, the global image covers the common detection area of the cameras and radars at the road intersection, which is equivalent to using the global image as the reference object for calibrating the cameras and radars. The coordinate system corresponding to the global image is therefore used as the reference coordinate system, into which the coordinate system of each camera image and the coordinate system of each radar are converted. The calibration process for a camera's coordinate system is shown in the following steps S2 to S4.
S2, performing multi-point calibration on the calibration image to obtain at least four first calibration points, and determining, in the global image and based on the at least four first calibration points, second calibration points representing the same positions as the first calibration points, wherein each first calibration point is a pixel in the calibration image representing a marker in the road intersection. In specific application, step S2 establishes the mapping relationship between the calibration image and the global image; optionally, the markers may be, but are not limited to, the marking lines and/or electronic police poles of the road intersection.
Furthermore, when marking lines are used as calibration objects, the end points, corners or intersection points of the marking lines can be used as annotation points, which facilitates mapping the annotation points into the global image. Referring to fig. 2, in this embodiment 11 annotation points are selected from the calibration image, all of which are inflection points, end corners or intersections of marking lines: for example, the third annotation point is the connection point between the stem and the arrowhead of a straight-ahead arrow, the first annotation point is the intersection of two marking lines, and the eleventh annotation point is the left end point of the tail of a left-turn arrow. The annotation points on the remaining marking lines are selected on the same principle, which is not repeated here.
After the 11 annotation points are selected from the calibration image, they serve as the first calibration points. These 11 first calibration points must then be mapped into the global image, which essentially means determining, in the global image, the pixels that represent the same positions as the first calibration points and using them as the second calibration points; that is, a first calibration point in the calibration image and its corresponding second calibration point in the global image indicate the same physical position at the road intersection.
As shown in fig. 2, fig. 2(a) is the calibration image and fig. 2(b) is the global image. The eleventh annotation point in the calibration image is the left end point of the tail of a left-turn arrow; correspondingly, the same left-turn arrow (the marking line of the same lane and the same left-turn direction) is identified in the global image, and the left end point of the tail of that arrow in the global image is taken as the second calibration point corresponding to the eleventh annotation point (i.e. annotation 11 in fig. 2(b)), thereby mapping the eleventh annotation point of the calibration image into the global image. After the other first calibration points are mapped into the global image one by one in the same way, the mapping between the calibration image and the global image is complete; the mapping relationship is shown in fig. 2.
After each first calibration point in the calibration image is mapped to the global image to obtain a second calibration point corresponding to each first calibration point, the spatial mapping relationship between the first calibration point and the second calibration point can be obtained, as shown in steps S3 and S4.
S3, obtaining the coordinates of each first calibration point based on the calibration image, and obtaining the coordinates of each second calibration point based on the global image. In specific application, each first calibration point is essentially a pixel in the calibration image, so its coordinates can be read from the calibration image (for example, a coordinate system is established with the lower-left corner of the calibration image as the origin, the width direction as the positive x-axis and the height direction as the positive y-axis, from which the coordinates of each first calibration point are obtained); similarly, the coordinates of each second calibration point are derived from the global image.
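Most image libraries index pixels from the top-left corner with y growing downward, so the lower-left-origin convention described above needs a small conversion. A minimal sketch, assuming Python and illustrative values:

```python
# Minimal sketch: converting a picked pixel (top-left origin, y down)
# to the lower-left-origin convention described above; the sample
# values are illustrative only.
def to_lower_left_origin(px, py, image_height):
    """x grows along the width; y grows upward from the bottom edge."""
    return px, (image_height - 1) - py

x1, y1 = to_lower_left_origin(120, 340, image_height=1080)
print(x1, y1)  # -> 120 739
```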
After the coordinates of each first calibration point and the coordinates of each second calibration point are obtained, the coordinates of the two calibration points can be used to calculate a transformation matrix for transforming the coordinate system corresponding to the camera-captured image to the coordinate system corresponding to the global image, as shown in step S4 below.
S4, obtaining the spatial mapping matrix between the calibration image and the global image based on the coordinates of each first calibration point and each second calibration point. In specific application, the calibration image is captured by a camera, so its corresponding coordinate system is an image coordinate system, while the coordinate system corresponding to the global image is treated as the world coordinate system (below, the world coordinate system and the coordinate system of the global image are used interchangeably). The spatial mapping matrix between the calibration image and the global image therefore projects the image coordinate system onto the world coordinate system, i.e. it is the projection matrix between the two. The calculation process is given in the following steps S41 to S43.
S41, constructing a 3 × 3 homography matrix, wherein each element in the homography matrix is a parameter to be solved. In this embodiment, the homography matrix is the constructed projection matrix, and the conversion relation between a first calibration point and its corresponding second calibration point is as follows:
$$\begin{bmatrix} u_m \\ v_m \\ w_m \end{bmatrix} = H \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}$$
In the above formula, $H$ is the homography matrix; $x_i$ and $y_i$ denote the abscissa and ordinate of the $i$-th first calibration point, and $u_m$ and $v_m$ denote the abscissa and ordinate of the $m$-th second calibration point. The indices $i$ and $m$ correspond one to one: when $i$ is 1, $m$ is also 1, meaning the 1st first calibration point is mapped into the global image to obtain the corresponding 1st second calibration point, and likewise for the other values, which is not repeated here. Meanwhile, $i = 1, 2, \ldots, N$ and $m = 1, 2, \ldots, M$, where $N$ and $M$ are the total numbers of first and second calibration points. In addition, in specific applications, $w_m$ has the value 1.
Optionally, the exemplary homography matrix is:
$$H = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix}$$
therefore, the coordinate conversion equation set between the image coordinate system and the world coordinate system can be established by using the coordinates of the first calibration point, the coordinates of the second calibration point and the constructed homography matrix, as shown in the following step S42.
S42, constructing a coordinate conversion equation set between the coordinate system corresponding to the calibration image and the coordinate system corresponding to the global image, based on the coordinates of each first calibration point, the coordinates of each second calibration point and the homography matrix. In specific application, the coordinate conversion equation set is obtained by expanding the conversion relation, as shown in the following formula (1):
$$u_m = \frac{h_1 x_i + h_2 y_i + h_3}{h_7 x_i + h_8 y_i + h_9}, \qquad v_m = \frac{h_4 x_i + h_5 y_i + h_6}{h_7 x_i + h_8 y_i + h_9} \tag{1}$$

In the above formula (1), $h_1, h_2, \ldots, h_9$ are the elements of the homography matrix, i.e. the parameters to be solved.
After the coordinate transformation system of equations is established, the system of equations can be solved to obtain the values of the elements in the homography matrix, as shown in step S43 below.
S43, solving the coordinate conversion equation set by singular value decomposition to obtain the value of each parameter to be solved, updating the homography matrix with these values, and using the updated homography matrix as the spatial mapping matrix. In specific application, the coordinates of the at least 4 first calibration points and the at least 4 second calibration points are substituted into formula (1) to obtain a set of equations, the terms of these equations are assembled into a matrix to obtain a matrix equation, and the matrix equation is solved by singular value decomposition to obtain the values of the parameters to be solved in the homography matrix.
For convenience of explanation, the foregoing calculation process is described below by taking 4 first calibration points and 4 second calibration points as examples:
When i and m are 1, the aforementioned formula (1) becomes:

$$u_1 = \frac{h_1 x_1 + h_2 y_1 + h_3}{h_7 x_1 + h_8 y_1 + h_9}, \qquad v_1 = \frac{h_4 x_1 + h_5 y_1 + h_6}{h_7 x_1 + h_8 y_1 + h_9}$$

When i and m are 2, the aforementioned formula (1) becomes:

$$u_2 = \frac{h_1 x_2 + h_2 y_2 + h_3}{h_7 x_2 + h_8 y_2 + h_9}, \qquad v_2 = \frac{h_4 x_2 + h_5 y_2 + h_6}{h_7 x_2 + h_8 y_2 + h_9}$$

When i and m are 3, the aforementioned formula (1) becomes:

$$u_3 = \frac{h_1 x_3 + h_2 y_3 + h_3}{h_7 x_3 + h_8 y_3 + h_9}, \qquad v_3 = \frac{h_4 x_3 + h_5 y_3 + h_6}{h_7 x_3 + h_8 y_3 + h_9}$$

When i and m are 4, the aforementioned formula (1) becomes:

$$u_4 = \frac{h_1 x_4 + h_2 y_4 + h_3}{h_7 x_4 + h_8 y_4 + h_9}, \qquad v_4 = \frac{h_4 x_4 + h_5 y_4 + h_6}{h_7 x_4 + h_8 y_4 + h_9}$$
Therefore, the terms in these equations are extracted to form a matrix equation, as shown in the following formula (2):

$$\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -u_1 x_1 & -u_1 y_1 & -u_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -v_1 x_1 & -v_1 y_1 & -v_1 \\
x_2 & y_2 & 1 & 0 & 0 & 0 & -u_2 x_2 & -u_2 y_2 & -u_2 \\
0 & 0 & 0 & x_2 & y_2 & 1 & -v_2 x_2 & -v_2 y_2 & -v_2 \\
x_3 & y_3 & 1 & 0 & 0 & 0 & -u_3 x_3 & -u_3 y_3 & -u_3 \\
0 & 0 & 0 & x_3 & y_3 & 1 & -v_3 x_3 & -v_3 y_3 & -v_3 \\
x_4 & y_4 & 1 & 0 & 0 & 0 & -u_4 x_4 & -u_4 y_4 & -u_4 \\
0 & 0 & 0 & x_4 & y_4 & 1 & -v_4 x_4 & -v_4 y_4 & -v_4
\end{bmatrix}
\begin{bmatrix} h_1 \\ h_2 \\ h_3 \\ h_4 \\ h_5 \\ h_6 \\ h_7 \\ h_8 \\ h_9 \end{bmatrix} = \mathbf{0} \tag{2}$$
in the formula (2), the coordinates of each first calibration point and each second calibration point are known, so that the formula (2) can be solved by using a singular value decomposition method, namely, the value of each parameter to be solved can be obtained, and finally, the value of the parameter to be solved is substituted into the homography matrix, namely, the updating of the matrix can be completed, so that a space mapping matrix is obtained; in this embodiment, the singular value decomposition method is a common algorithm for matrix decomposition, and the principle thereof is not described in detail.
After the spatial mapping matrix between the calibration image and the global image is obtained, the spatial mapping relationship between the plane where the radar is located and the global image must be calculated, so that the conversion between the radar's coordinate system and the global image's coordinate system can be realized based on this relationship. The radar mapping process is shown in the following step S5.
S5, acquiring the installation position of the radar at the road intersection, and obtaining a space mapping parameter between a scanning plane corresponding to the radar and the global image based on the installation position and the global image; in a specific application, the calculation process of the spatial mapping parameters may include, but is not limited to, the following steps S51 to S56.
S51, marking the radar position in the global image based on the installation position to obtain the radar position point. In specific application, since the radar is mounted on an electronic police pole at the road intersection as described above, the pole on which the radar is mounted can be identified in the global image, and a point on that pole is then selected as the radar position mark; the selected pixel is the radar position point, shown as point A in fig. 3.
After the radar position has been labeled in the global image, radar position calibration, i.e. the mapping of the radar into the global image, can be performed. Specifically, in this embodiment, a radar calibration lane is first determined in the global image; calibration parameters and the radar's monitoring data for a vehicle are then introduced to map the vehicle into the global image; finally, the vehicle's position is corrected by continuously adjusting the values of the calibration parameters until the position falls inside the radar calibration lane. This completes the mapping between the radar and the global image, and the calibration parameters that bring the vehicle's position point into the radar calibration lane are taken as the spatial mapping parameters. The mapping process is shown in the following steps S52 to S56.
S52, determining a radar calibration lane in the global image based on the radar position point, wherein the radar calibration lane is located in a lane opposite to the lane where the radar position point is located and is parallel to a normal of an antenna array surface of the radar; in a specific application, the process of determining the radar calibration lane is shown in steps S52a to S52b.
S52a, selecting four third calibration points in the lane opposite the lane where the radar position point is located in the global image, wherein the four third calibration points are divided into two groups, the points of each group are connected to form a straight line parallel to the opposite lane, and the straight lines corresponding to the two groups are parallel to each other.
S52b, based on the straight lines corresponding to the two groups of third calibration points, taking the area between the straight lines corresponding to the two groups of third calibration points as the radar calibration lane.
In specific application, a target lane facing the radar can be selected, and pixels on the lane lines of this opposite lane are taken as the third calibration points, so that the lane they form is parallel to the normal of the radar antenna array surface. As shown in fig. 3, points B, C, D and E are the third calibration points, where B and C form one group and D and E the other; each group forms a straight line, and the area between the two lines constitutes the radar calibration lane.
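The later steps S56a/S56b repeatedly test whether a position point lies inside this lane. One possible form of that test — an implementation assumption, not something the text specifies — checks the sign of the cross product against each boundary line:

```python
# A sketch (not from the patent text) of testing whether a point lies
# in the radar calibration lane, i.e. between the two parallel lines
# through B,C and D,E; coordinates are illustrative global-image pixels.
def side(p, a, b):
    """Signed area: > 0 if p is left of line a->b, < 0 if right."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def in_calibration_lane(p, B, C, D, E):
    # Inside when p falls on opposite sides of the two boundary lines
    # (or exactly on one of them).
    return side(p, B, C) * side(p, D, E) <= 0

B, C = (300, 120), (320, 480)   # first group of third calibration points
D, E = (360, 118), (380, 478)   # second group, parallel to the first
print(in_calibration_lane((340, 300), B, C, D, E))  # -> True
```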
After a radar calibration lane is determined in the global image, monitoring data of the radar can be introduced so as to complete the position calibration of the radar by means of the monitoring data, and therefore space mapping parameters are obtained after the calibration.
S53, acquiring the radar's monitoring data for any vehicle, wherein the monitoring data includes the azimuth angle of the vehicle relative to the radar. In specific application, the monitoring data also includes the vehicle's longitude and latitude, i.e. coordinate data in the radar's scanning plane. The azimuth angle and coordinate data of the vehicle can be read directly by the radar, specifically from the GPS (Global Positioning System) receiver and IMU (Inertial Measurement Unit) on the radar.
After the monitoring data of any vehicle is obtained, the vehicle may be mapped to the global image based on the monitoring data to obtain a position point of the vehicle, so as to adjust the position point to obtain the spatial mapping parameter, specifically, the vehicle position labeling process is as shown in step S54 below.
S54, labeling the vehicle position in the global image based on the monitoring data and the radar position point, to obtain the position point of the vehicle. In specific application, once the vehicle's azimuth angle is known, the direction of the vehicle relative to the radar position point can be determined, and a point in that direction is selected as the vehicle's position point. Furthermore, since the vehicle's longitude and latitude are also known, the distance from the vehicle to the radar can be obtained. After the relative direction and distance between the vehicle and the radar position point are determined, the vehicle's position in the global image can be fixed: a proportional relationship between one pixel of the global image and the actual unit distance (1 m) is set, so the pixel distance between the vehicle and the radar position point in the global image follows from the actual distance, and the vehicle's position in the global image can be determined accurately.
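A minimal sketch of this placement, assuming Python; the azimuth convention (clockwise from due north), the north-up orientation of the global image and the meters-per-pixel scale are stated assumptions, and all values are illustrative:

```python
# Sketch of step S54: placing a radar-detected vehicle in the global
# image from its range and azimuth. Assumptions (not fixed by the
# text): azimuth is measured clockwise from due north, the global
# image is north-up, and meters_per_px is the set pixel scale.
import math

def vehicle_position_px(radar_px, range_m, azimuth_deg, meters_per_px):
    east = range_m * math.sin(math.radians(azimuth_deg))
    north = range_m * math.cos(math.radians(azimuth_deg))
    u = radar_px[0] + east / meters_per_px
    v = radar_px[1] - north / meters_per_px  # image rows grow southward
    return u, v

# Radar position point A at pixel (512, 700); vehicle 40 m away,
# 15 degrees east of north; 0.1 m per pixel (illustrative values).
print(vehicle_position_px((512, 700), 40.0, 15.0, 0.1))
```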
After the vehicle is mapped into the global image, the spatial mapping parameters can be obtained. In this embodiment, the calibration parameters are obtained first, and the vehicle's position in the global image is then corrected by adjusting the values of the calibration parameters until the position falls inside the radar calibration lane, yielding the spatial mapping parameters, as shown in the following steps S55 and S56.
S55, obtaining calibration parameters, wherein the calibration parameters are used for adjusting the position point of the vehicle in the global image. In specific application, the calibration parameters are obtained as shown in the following steps S55a and S55b.
S55a, establishing a radar coordinate system with the radar position point as the origin, the due-east direction of the origin as the positive x-axis direction, and the due-north direction of the origin as the positive y-axis direction.
S55b, determining an initial radar azimuth angle, an initial calibration abscissa and an initial calibration ordinate based on the radar coordinate system, so as to form the calibration parameters from the initial radar azimuth angle, the initial calibration abscissa and the initial calibration ordinate. In specific application, the initial radar azimuth angle is the azimuth angle of the vehicle relative to the radar in the monitoring data, and the initial calibration abscissa and initial calibration ordinate may be, but are not limited to, preset values, which can be set according to actual use.
After obtaining the calibration parameters, the position point of any vehicle may be corrected by adjusting the values of the initial radar azimuth, the initial calibration abscissa, and the initial calibration ordinate in the calibration parameters, so as to obtain the spatial mapping parameters when the position point of any vehicle is adjusted into the radar calibration lane, as shown in step S56 below.
S56, carrying out position calibration on the position point of any vehicle by utilizing the radar calibration lane and the calibration parameters so as to obtain the space mapping parameters when the position point of any vehicle is adjusted into the radar calibration lane; specifically, the adjustment process is as shown in step S56a and step S56b.
S56a, adjusting the calibration parameters, and after the calibration parameters are adjusted each time, judging whether the position point of any vehicle in the global image moves into the radar calibration lane.
S56b, if yes, taking a corresponding calibration parameter when the position point of any vehicle is moved to the radar calibration lane as the space mapping parameter; otherwise, continuing to adjust the calibration parameters until the position point of any vehicle moves into the radar calibration lane.
The principle of the foregoing steps S56a and S56b is as follows: the calibration parameters are essentially the conversion parameters between the radar's scanning plane and the global image, i.e. between the coordinate system of the radar and the coordinate system of the global image. Specifically, the vehicle's coordinates in the global image can be calculated from the calibration parameters and the vehicle's coordinate data in the radar scanning plane. Let $x_0$ and $y_0$ denote the vehicle's abscissa and ordinate in the radar scanning plane, $t$ the radar azimuth angle among the calibration parameters, and $X_c$ and $Y_c$ the calibration abscissa and calibration ordinate respectively; then the vehicle's abscissa $X_r$ and ordinate $Y_r$ in the global image are

$$X_r = X_c + k x_0 \cos t - k y_0 \sin t, \qquad Y_r = Y_c + k x_0 \sin t + k y_0 \cos t$$

where $k$ is a conversion coefficient with a constant value. Each time the calibration parameters are adjusted, the vehicle's coordinates in the global image (hereinafter the vehicle global coordinates) are updated, and the pixel corresponding to the vehicle global coordinates is determined in the global image, completing one position correction of the vehicle. The calibration parameters are adjusted continuously until the pixel determined from the vehicle global coordinates lies inside the radar calibration lane; adjustment then stops, and the calibration parameters that place the vehicle's position point inside the radar calibration lane are taken as the spatial mapping parameters.
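A sketch of steps S56a/S56b as a simple grid search over the calibration parameters, assuming Python; the search ranges, step sizes and the in_lane predicate (for example, the lane test sketched after step S52b) are illustrative assumptions, since the text does not fix how the parameters are swept:

```python
# Sketch of steps S56a/S56b: adjusting the calibration parameters
# (t, Xc, Yc) until the mapped vehicle point falls inside the radar
# calibration lane, using the document's update rule
#   Xr = Xc + k*x0*cos(t) - k*y0*sin(t)
#   Yr = Yc + k*x0*sin(t) + k*y0*cos(t)
import itertools
import math

def solve_mapping_params(x0, y0, k, in_lane,
                         t_range, xc_range, yc_range):
    for t, xc, yc in itertools.product(t_range, xc_range, yc_range):
        xr = xc + k * x0 * math.cos(t) - k * y0 * math.sin(t)
        yr = yc + k * x0 * math.sin(t) + k * y0 * math.cos(t)
        if in_lane((xr, yr)):
            return t, xc, yc          # spatial mapping parameters
    return None                       # keep adjusting with finer steps
```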
Therefore, based on the foregoing step S5 and its sub-steps, a mapping between the coordinate system of the radar and the coordinate system of the global image can be obtained.
After the spatial mapping matrix between the coordinate system corresponding to the camera and the coordinate system corresponding to the global image and the spatial mapping parameters between the coordinate system in which the radar is located and the coordinate system in which the global image is located are obtained, the data calibration of the radar data and the camera data can be completed, as shown in the following step S6.
S6, calibrating the detection data of the radar and the detection data of the camera based on the spatial mapping matrix and the spatial mapping parameters to obtain global calibration data; in particular, the data calibration process is shown as step S61, step S62, and step S63.
S61, mapping the detection data of the camera to a global image coordinate system based on the space mapping matrix to obtain calibration data of the camera, wherein the global image coordinate system is a coordinate system corresponding to the global image.
And S62, mapping the detection data of the radar to the global image coordinate system based on the space mapping parameters to obtain the calibration data of the radar.
And S63, forming the global calibration data by using the calibration data of the camera and the calibration data of the radar.
In specific application, the detection data of a camera are the images it captures. To map them into the global image coordinate system, the coordinates of the traffic targets (e.g. vehicles and/or pedestrians) in the captured image are first obtained and then multiplied by the spatial mapping matrix, yielding the coordinates of those traffic targets in the global image. The same applies to the radar's detection data: the coordinates of a traffic target in the radar's scanning plane are read from the radar, and the target's coordinate point in the global image is then obtained using the spatial mapping parameters. In this way, the data of multiple cameras and multiple radars can be calibrated into one coordinate system at the same time, and data fusion of multiple radars and cameras at a large intersection can be achieved rapidly.
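A sketch of the two mappings of step S6, assuming Python with NumPy; the matrix H and all parameter values below are illustrative placeholders rather than calibrated results:

```python
# Sketch of steps S61/S62: mapping camera and radar detections into
# the global image coordinate system.
import math
import numpy as np

def camera_to_global(H, x, y):
    # Step S61: homogeneous multiply by the spatial mapping matrix,
    # then divide by the third component (cf. formula (1)).
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

def radar_to_global(x0, y0, t, Xc, Yc, k):
    # Step S62: apply the spatial mapping parameters (t, Xc, Yc)
    # with conversion coefficient k, as in step S56.
    Xr = Xc + k * x0 * math.cos(t) - k * y0 * math.sin(t)
    Yr = Yc + k * x0 * math.sin(t) + k * y0 * math.cos(t)
    return Xr, Yr

H = np.array([[0.9, 0.1, 12.0], [-0.05, 1.1, 30.0], [1e-4, 2e-4, 1.0]])
print(camera_to_global(H, 320.0, 240.0))
print(radar_to_global(14.2, 52.8, t=0.12, Xc=512.0, Yc=700.0, k=10.0))
```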
Therefore, through the global image-based radar vision data calibration method detailed in steps S1 to S6, the coordinate system corresponding to the global image of the data fusion area is used as the reference coordinate system, and multiple cameras and multiple radars can be coordinate-calibrated simultaneously. This solves the problem that traditional calibration methods can only calibrate a single camera against a single radar, greatly simplifies the labeling process, improves labeling efficiency, and realizes data fusion of traffic targets in different directions at the road intersection.
As shown in fig. 4, a second aspect of this embodiment provides a hardware device for implementing the global image-based radar vision data calibration method described in the first aspect of this embodiment, including:
the system comprises an image acquisition unit, a data fusion area and a data fusion area, wherein the image acquisition unit is used for acquiring a calibration image of a road intersection and a global image of the data fusion area in the road intersection, the data fusion area is a common detection area of a camera and a radar at the road intersection, the global image is an overhead image of the data fusion area, and the calibration image is a road intersection image shot by the camera.
And the labeling unit is used for performing multi-point labeling on the calibration image to obtain at least four first calibration points, and determining, in the global image and based on the at least four first calibration points, a second calibration point representing the same position as each first calibration point, wherein each first calibration point is a pixel in the calibration image representing a marker at the road intersection.
And the coordinate acquisition unit is used for acquiring the coordinate of each first calibration point based on the calibration image and acquiring the coordinate of each second calibration point based on the global image.
And the coordinate mapping unit is used for obtaining a spatial mapping matrix between the calibration image and the global image based on the coordinates of each first calibration point and the coordinates of each second calibration point.
And the coordinate mapping unit is further used for acquiring the installation position of the radar at the road intersection and obtaining the space mapping parameter between the scanning plane corresponding to the radar and the global image based on the installation position and the global image.
And the data calibration unit is used for calibrating the detection data of the radar and the detection data of the camera based on the spatial mapping matrix and the spatial mapping parameters to obtain global calibration data.
For the working process, the working details, and the technical effects of the hardware apparatus provided in this embodiment, reference may be made to the first aspect of the embodiment, which is not described herein again.
As shown in fig. 5, a third aspect of this embodiment provides another global image-based radar vision data calibration device. Taking an electronic device as an example, the device includes a memory, a processor, and a transceiver that are communicatively connected in sequence, wherein the memory is configured to store a computer program, the transceiver is configured to send and receive messages, and the processor is configured to read the computer program and execute the global image-based radar vision data calibration method according to the first aspect of this embodiment.
For example, the memory may include, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a flash memory, a first-in-first-out memory (FIFO), and/or a first-in-last-out memory (FILO). In particular, the processor may include one or more processing cores, such as a 4-core or an 8-core processor. The processor may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA), and may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also referred to as a central processing unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state.
In some embodiments, the processor may be integrated with a graphics processing unit (GPU), which is responsible for rendering and drawing the content to be displayed on the display screen. For example, the processor may be, but is not limited to, an STM32F105-series microprocessor, a reduced instruction set computer (RISC) microprocessor, an X86-architecture processor, or a processor with an integrated embedded neural-network processing unit (NPU). The transceiver may be, but is not limited to, a wireless fidelity (WiFi) transceiver, a Bluetooth transceiver, a general packet radio service (GPRS) transceiver, a ZigBee transceiver (a low-power local area network protocol based on the IEEE 802.15.4 standard), a 3G transceiver, a 4G transceiver, and/or a 5G transceiver. In addition, the device may also include, but is not limited to, a power module, a display screen, and other necessary components.
For the working process, the working details, and the technical effects of the electronic device provided in this embodiment, reference may be made to the first aspect of the embodiment, which is not described herein again.
A fourth aspect of this embodiment provides a storage medium storing instructions for the global image-based radar vision data calibration method according to the first aspect; that is, when the stored instructions are run on a computer, the global image-based radar vision data calibration method according to the first aspect is performed.
The storage medium refers to a carrier for storing data, and may include, but is not limited to, a floppy disk, an optical disk, a hard disk, a flash memory, a USB flash disk, and/or a memory stick; the computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
For the working process, the working details, and the technical effects of the storage medium provided in this embodiment, reference may be made to the first aspect of the embodiment, which is not described herein again.
A fifth aspect of the present embodiment provides a computer program product containing instructions, which when executed on a computer, cause the computer to execute the method for calibrating radar vision data based on global images according to the first aspect of the present embodiment, wherein the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable apparatus.
Finally, it should be noted that: the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A radar vision data calibration method based on a global image is characterized by comprising the following steps:
acquiring a calibration image of a road intersection and a global image of a data fusion area in the road intersection, wherein the data fusion area is a common detection area of a camera and a radar at the road intersection, the global image is an overhead image of the data fusion area, and the calibration image is a road intersection image shot by the camera;
performing multi-point calibration on the calibration image to obtain at least four first calibration points, and determining, in the global image, a second calibration point having the same characterization position as each first calibration point based on the at least four first calibration points, wherein each first calibration point is a pixel point used for characterizing a road intersection marker in the calibration image;
acquiring coordinates of each first calibration point based on the calibration image, and acquiring coordinates of each second calibration point based on the global image;
obtaining a spatial mapping matrix between the calibration image and the global image based on the coordinates of each first calibration point and the coordinates of each second calibration point;
acquiring the installation position of a radar at the road intersection, and obtaining spatial mapping parameters between a scanning plane corresponding to the radar and the global image based on the installation position and the global image;
and calibrating the detection data of the radar and the detection data of the camera based on the spatial mapping matrix and the spatial mapping parameters to obtain global calibration data.
2. The method of claim 1, wherein deriving a spatial mapping matrix between the calibration image and the global image based on the coordinates of each first calibration point and the coordinates of each second calibration point comprises:
constructing a 3 × 3 homography matrix, wherein each element in the homography matrix is a parameter to be solved;
based on the coordinate of each first calibration point, the coordinate of each second calibration point and the homography matrix, constructing a coordinate conversion equation set between a coordinate system corresponding to the calibration image and a coordinate system corresponding to the global image;
and solving the coordinate conversion equation set by a singular value decomposition method to obtain the value of each parameter to be solved, updating the homography matrix with these values, and taking the updated homography matrix as the spatial mapping matrix.
3. The method of claim 2, wherein the system of coordinate transformation equations is:
$$
s_m \begin{bmatrix} u_m \\ v_m \\ 1 \end{bmatrix} = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} \qquad (1)
$$
In the above formula (1), $h_1, h_2, \ldots, h_9$ are the parameters to be solved, $x_i, y_i$ respectively represent the abscissa and the ordinate of the $i$-th first calibration point in the coordinate system corresponding to the calibration image, $u_m, v_m$ respectively represent the abscissa and the ordinate of the $m$-th second calibration point in the coordinate system corresponding to the global image, and $s_m$ is a scale factor, wherein $i = m = 1, 2, \ldots, n$ and $n \geq 4$.
4. The method of claim 1, wherein obtaining spatial mapping parameters between the scanning plane corresponding to the radar and the global image based on the installation location and the global image comprises:
performing radar position marking in the global image based on the installation position to obtain a radar position point;
determining a radar calibration lane in the global image based on the radar position point, wherein the radar calibration lane is located in a lane opposite to the lane where the radar position point is located and is parallel to a normal of an antenna array surface of the radar;
acquiring monitoring data of any vehicle by the radar, wherein the monitoring data comprises an azimuth angle of any vehicle relative to the radar;
based on the monitoring data and the radar position points, vehicle position labeling is carried out in the global image to obtain the position points of any vehicle;
obtaining a calibration parameter, wherein the calibration parameter is used for adjusting a position point of any vehicle in a global image;
and carrying out position calibration on the position point of any vehicle by utilizing the radar calibration lane and the calibration parameters, so as to obtain the spatial mapping parameters when the position point of any vehicle is adjusted into the radar calibration lane.
5. The method of claim 4, wherein determining a radar calibration lane in the global image based on the radar location points comprises:
selecting four third calibration points in a lane opposite to the lane where the radar position point is located in the global image, wherein the four third calibration points are divided into two groups, each group of the third calibration points are connected to form a straight line parallel to the opposite lane, and the straight lines corresponding to the two groups of the third calibration points are parallel to each other;
and based on the straight lines corresponding to the two groups of third calibration points, taking the area between the straight lines corresponding to the two groups of third calibration points as the radar calibration lane.
6. The method of claim 4, wherein obtaining calibration parameters comprises:
establishing a radar coordinate system by taking the radar position point as the origin, wherein the due-east direction of the origin is the positive direction of the x-axis, and the due-north direction of the origin is the positive direction of the y-axis;
determining an initial radar azimuth angle, an initial calibration abscissa and an initial calibration ordinate based on the radar coordinate system so as to form the calibration parameters by using the initial radar azimuth angle, the initial calibration abscissa and the initial calibration ordinate;
correspondingly, performing position calibration on the position point of any vehicle by utilizing the radar calibration lane and the calibration parameters, so as to obtain the spatial mapping parameters when the position point of any vehicle is adjusted into the radar calibration lane, includes:
adjusting the calibration parameters, and judging whether the position point of any vehicle in the global image moves into the radar calibration lane or not after the calibration parameters are adjusted every time;
if yes, taking the calibration parameters under which the position point of any vehicle has moved into the radar calibration lane as the spatial mapping parameters; otherwise, continuing to adjust the calibration parameters until the position point of any vehicle moves into the radar calibration lane.
7. The method of claim 1, wherein calibrating the radar detection data and the camera detection data based on the spatial mapping matrix and the spatial mapping parameters to obtain global calibration data comprises:
mapping the detection data of the camera to a global image coordinate system based on the spatial mapping matrix to obtain calibration data of the camera, wherein the global image coordinate system is a coordinate system corresponding to the global image; and
mapping the detection data of the radar to the global image coordinate system based on the spatial mapping parameters to obtain calibration data of the radar;
and forming the global calibration data by using the calibration data of the camera and the calibration data of the radar.
8. A radar vision data calibration device based on a global image is characterized by comprising:
an image acquisition unit, configured to acquire a calibration image of a road intersection and a global image of a data fusion area in the road intersection, wherein the data fusion area is a common detection area of a camera and a radar at the road intersection, the global image is an overhead image of the data fusion area, and the calibration image is a road intersection image shot by the camera;
a calibration unit, configured to perform multi-point calibration on the calibration image to obtain at least four first calibration points, and to determine, in the global image, a second calibration point having the same characterization position as each first calibration point based on the at least four first calibration points, wherein each first calibration point is a pixel point used for characterizing a road intersection marker in the calibration image;
a coordinate obtaining unit, configured to obtain coordinates of each first calibration point based on the calibration image, and obtain coordinates of each second calibration point based on the global image;
a coordinate mapping unit, configured to obtain a spatial mapping matrix between the calibration image and the global image based on the coordinate of each first calibration point and the coordinate of each second calibration point;
the coordinate mapping unit is further used for acquiring the installation position of the radar at the road intersection, and obtaining space mapping parameters between the scanning plane corresponding to the radar and the global image based on the installation position and the global image;
and the data calibration unit is used for calibrating the detection data of the radar and the detection data of the camera based on the spatial mapping matrix and the spatial mapping parameters to obtain global calibration data.
9. An electronic device, comprising: the device comprises a memory, a processor and a transceiver which are sequentially connected in a communication manner, wherein the memory is used for storing a computer program, the transceiver is used for transceiving messages, and the processor is used for reading the computer program and executing the global image-based radar vision data calibration method as claimed in any one of claims 1 to 7.
10. A storage medium, wherein the storage medium stores instructions for executing the global image based radar vision data calibration method according to any one of claims 1 to 7 when the instructions are executed on a computer.
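To illustrate the position-calibration loop recited in claims 4 and 6, the following sketch replaces the manual adjustment with a simple grid search. Here project (which places the vehicle position point in the global image for a given parameter triple) and lane_contains (a point-in-lane test against the radar calibration lane) are hypothetical callables supplied by the caller, and the search ranges are arbitrary example values.

```python
import numpy as np

def search_spatial_mapping_params(project, lane_contains,
                                  az_range=(-10.0, 10.0),
                                  off_range=(-20.0, 20.0),
                                  az_step=0.5, off_step=2.0):
    """Grid-search the calibration parameters (initial radar azimuth az0
    and calibration offsets dx, dy) until the projected vehicle position
    point falls inside the radar calibration lane; returns that triple as
    the spatial mapping parameters, or None if no candidate works."""
    for az0 in np.arange(az_range[0], az_range[1], az_step):
        for dx in np.arange(off_range[0], off_range[1], off_step):
            for dy in np.arange(off_range[0], off_range[1], off_step):
                if lane_contains(project(az0, dx, dy)):
                    return az0, dx, dy
    return None
```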

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210420934.5A CN114782548B (en) 2022-04-20 2022-04-20 Global image-based radar data calibration method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN114782548A true CN114782548A (en) 2022-07-22
CN114782548B CN114782548B (en) 2024-03-29

Family

ID=82431748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210420934.5A Active CN114782548B (en) 2022-04-20 2022-04-20 Global image-based radar data calibration method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114782548B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020114234A1 (en) * 2018-12-05 2020-06-11 杭州海康威视数字技术股份有限公司 Target gps determination method and camera
WO2021115273A1 (en) * 2019-12-10 2021-06-17 华为技术有限公司 Communication method and apparatus
CN113724333A (en) * 2020-05-26 2021-11-30 华为技术有限公司 Space calibration method and system of radar equipment
CN113012237A (en) * 2021-03-31 2021-06-22 武汉大学 Millimeter wave radar and video monitoring camera combined calibration method
CN113744348A (en) * 2021-08-31 2021-12-03 南京慧尔视智能科技有限公司 Parameter calibration method and device and radar vision fusion detection equipment
CN114170303A (en) * 2021-10-19 2022-03-11 深圳市金溢科技股份有限公司 Combined calibration method, device, system, equipment and medium for radar and camera
CN114371475A (en) * 2021-12-28 2022-04-19 浙江大华技术股份有限公司 Method, system, equipment and computer storage medium for optimizing calibration parameters

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
胡峰; 胡春生; 王省书; 焦宏伟: "Calibration of the external positional relationship between an imaging lidar and a camera", Optics and Precision Engineering (光学精密工程), no. 04, 15 April 2011 (2011-04-15), pages 234-239 *
苏小明: "Research on navigation of substation inspection robots based on machine vision and radar data technology", Technology and Economic Guide (科技经济导刊), no. 07, 5 March 2020 (2020-03-05), page 21 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115571152A (en) * 2022-10-12 2023-01-06 深圳市旗扬特种装备技术工程有限公司 Safety early warning method, device, system, equipment and medium for non-motor vehicle
CN115571152B (en) * 2022-10-12 2023-06-06 深圳市旗扬特种装备技术工程有限公司 Safety early warning method, device, system, equipment and medium for non-motor vehicle
CN117541910A (en) * 2023-10-27 2024-02-09 北京市城市规划设计研究院 Fusion method and device for urban road multi-radar data

Also Published As

Publication number Publication date
CN114782548B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
CN110969663B (en) Static calibration method for external parameters of camera
US10659677B2 (en) Camera parameter set calculation apparatus, camera parameter set calculation method, and recording medium
WO2021037086A1 (en) Positioning method and apparatus
CN114782548B (en) Global image-based radar data calibration method, device, equipment and medium
CN114445592B (en) Bird's eye view semantic segmentation label generation method based on inverse perspective transformation and point cloud projection
CN112368756A (en) Method for calculating collision time of object and vehicle, calculating device and vehicle
CN112308927B (en) Fusion device of panoramic camera and laser radar and calibration method thereof
CN112233188A (en) Laser radar-based roof panoramic camera and calibration method thereof
CN116182805A (en) Homeland mapping method based on remote sensing image
EP4198901A1 (en) Camera extrinsic parameter calibration method and apparatus
CN112255604B (en) Method and device for judging accuracy of radar data and computer equipment
CN114091626B (en) True value detection method, device, equipment and storage medium
CN115588040A (en) System and method for counting and positioning coordinates based on full-view imaging points
US11403770B2 (en) Road surface area detection device
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
CN116518961B (en) Method and device for determining global pose of large-scale fixed vision sensor
CN115571152B (en) Safety early warning method, device, system, equipment and medium for non-motor vehicle
CN216771967U (en) Multi-laser radar calibration system and unmanned mining vehicle
CN116229713A (en) Space alignment method and system for vehicle-road cooperation
CN115665553A (en) Automatic tracking method and device for unmanned aerial vehicle, electronic equipment and storage medium
CN110969664B (en) Dynamic calibration method for external parameters of camera
CN111736137A (en) LiDAR external parameter calibration method, system, computer equipment and readable storage medium
CN117784121B (en) Combined calibration method and system for road side sensor and electronic equipment
CN117036511B (en) Calibration method and device for multi-type sensor, computer equipment and storage medium
Mei et al. A Multi-sensor Information Fusion Method for Autonomous Vehicle Perception System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant