CN115015909A - Radar data and video data fusion method and system based on perspective transformation - Google Patents
- Publication number
- CN115015909A CN115015909A CN202210510405.4A CN202210510405A CN115015909A CN 115015909 A CN115015909 A CN 115015909A CN 202210510405 A CN202210510405 A CN 202210510405A CN 115015909 A CN115015909 A CN 115015909A
- Authority
- CN
- China
- Prior art keywords
- coordinate
- perspective transformation
- transformation
- lane line
- pixel coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G01S13/867—Combination of radar systems with cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Mathematical Optimization (AREA)
- Computational Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Analysis (AREA)
- Computer Networks & Wireless Communication (AREA)
- Data Mining & Analysis (AREA)
- Pure & Applied Mathematics (AREA)
- Algebra (AREA)
- Electromagnetism (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The invention discloses a method and a system for fusing radar data and video data based on perspective transformation, which relate to the field of intelligent intersection monitoring. Coordinate perspective transformation coefficients of the monitored road area are acquired in advance, the position coordinates of a preset number of vehicle targets on the aerial view are acquired through the radar sensor, the pixel coordinates of the preset number of vehicle targets after perspective transformation are acquired, and the position coordinate transformation parameters are obtained from them. When a vehicle target exists in the image of the camera-monitored area, the position information of the vehicle target can be acquired in real time according to the pixel coordinates of the vehicle target in the image, and the time between the camera data and the radar data of each target in the monitored road area can be matched according to the position information of the vehicle target on the aerial view acquired by the radar sensor. Therefore the data monitored by the camera can be obtained in real time in actual units of meters, the accuracy and efficiency of intersection monitoring management based on the radar and the camera are improved, and the fusion of camera data and radar data is realized.
Description
Technical Field
The invention relates to the field of intelligent intersection monitoring, in particular to a radar data and video data fusion method and system based on perspective transformation.
Background
With the continuous development of intersection management technology, lane line positions are at present usually obtained by recognition and analysis of images acquired by a camera when intersection lane lines are planned and managed. In holographic intersection vehicle display and tracking that fuses radar data and video data, the lane line position information in the video tracking data must be fused with the lane line position information in the radar data. However, because the data collected by the camera are projected onto the image plane, planar top-view data cannot be obtained directly; and because the unit of the data collected by the camera is the pixel, the data monitored by the radar, whose actual unit is the meter, cannot be matched to it. The use of fusion algorithms is therefore greatly limited when lane lines and vehicle targets are planned and tracked.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method and a system for fusing radar data and video data based on perspective transformation, which can solve the problem that, when lane line planning management and vehicle target tracking management are currently performed on the basis of multiple sensors, the data monitored by the camera cannot be obtained in units of meters and the use of fusion algorithms is therefore greatly limited.
To achieve the above object, in one aspect, the present invention provides a method for fusing radar data and video data based on perspective transformation, the method comprising:
acquiring pixel coordinates after perspective transformation according to pixel coordinates of a preset number of position points in a preset rectangular area of the monitored road and the position parameters of the rectangular area corresponding to the preset rectangle in the image;
acquiring a coordinate perspective transformation coefficient according to the pixel coordinate and the pixel coordinate after perspective transformation;
generating a road aerial view according to the preset width of the monitored road and configuring the monitored target radar data of the monitored road on the aerial view;
acquiring position coordinates of a preset number of vehicle targets on the aerial view and acquiring pixel coordinates of the preset number of vehicle targets after perspective transformation through the radar sensor;
acquiring position coordinate transformation parameters according to the position coordinates of the preset number of vehicle targets, the pixel coordinates of the preset number of vehicle targets after perspective transformation and a preset perspective transformation area height value;
determining the position information of the vehicle target according to the coordinate perspective transformation coefficient, the position coordinate transformation parameter and the pixel coordinate of the vehicle target;
and matching the time between the camera data and the radar data of each target in the monitored road area according to the position information of the vehicle target and the position information of the vehicle target on the aerial view, which is acquired by the radar sensor.
Further, the step of obtaining a coordinate perspective transformation coefficient according to the pixel coordinate and the pixel coordinate after perspective transformation includes:
generating a first transformation matrix according to the pixel coordinates after perspective transformation corresponding to the preset number of position points in the preset rectangular area of the monitored road;
generating a second transformation matrix according to pixel coordinates corresponding to a preset number of position points in a preset rectangular area of the monitored road and the pixel coordinates after perspective transformation;
and acquiring a coordinate perspective transformation coefficient according to the first transformation matrix and the second transformation matrix.
Further, before the step of generating a road aerial view according to the preset width of the monitored road and configuring the monitored target radar data of the monitored road on the aerial view, the method further comprises:
generating an image after perspective transformation according to the coordinate perspective transformation coefficient;
judging whether the lane lines in the image after perspective transformation are parallel to one another and perpendicular to the x coordinate axis;
and if so, confirming that the coordinate perspective transformation coefficient meets the preset condition.
Further, before the step of determining the position information of the vehicle object according to the coordinate perspective transformation coefficient, the position coordinate transformation parameter, and the pixel coordinate of the vehicle object, the method further includes:
acquiring pixel coordinates of a preset number of position points from the lane line according to the shape of the lane line;
generating and displaying the lane line on the aerial view according to the coordinate perspective transformation coefficient, the position coordinate transformation parameter and pixel coordinates of a preset number of position points on the lane line;
judging whether the lane line is matched with the initial lane line corresponding to the aerial view;
if so, confirming that the position coordinate transformation parameters meet the preset condition;
and if not, adjusting the position coordinate transformation parameters according to the difference between the lane line and the initial lane line corresponding to the aerial view.
Further, the step of obtaining the pixel coordinates of the preset number of position points from the lane line according to the shape corresponding to the lane line includes:
if the lane line is a straight line, acquiring pixel coordinates of two endpoints from the lane line;
if the lane line is a broken line, acquiring pixel coordinates of two end points and each broken point from the lane line;
if the lane line is an arc line, acquiring pixel coordinates of a preset number of position points according to the radian of the lane line, wherein the preset number of position points is in direct proportion to the radian of the arc.
In another aspect, the present invention provides a radar data and video data fusion system based on perspective transformation, the system comprising: an acquisition unit, configured to acquire pixel coordinates after perspective transformation according to pixel coordinates of a preset number of position points in a preset rectangular area of the monitored road and the position parameters of the rectangular area corresponding to the preset rectangle in the image;
the acquisition unit is further configured to acquire a coordinate perspective transformation coefficient according to the pixel coordinates and the pixel coordinates after perspective transformation; generate a road aerial view according to the preset width of the monitored road and configure the monitored target radar data of the monitored road on the aerial view; acquire position coordinates of a preset number of vehicle targets on the aerial view and the pixel coordinates of the preset number of vehicle targets after perspective transformation through the radar sensor; and acquire position coordinate transformation parameters according to the position coordinates of the preset number of vehicle targets, the pixel coordinates of the preset number of vehicle targets after perspective transformation and a preset perspective transformation area height value;
a determining unit, configured to determine the position information of the vehicle target according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the vehicle target;
and a matching unit, configured to match the time between the camera data and the radar data of each target in the monitored road area according to the position information of the vehicle target and the position information of the vehicle target on the aerial view acquired by the radar sensor.
Further, the acquisition unit is specifically configured to generate a first transformation matrix according to the pixel coordinates after perspective transformation corresponding to a preset number of position points in a preset rectangular area of the monitored road; generate a second transformation matrix according to the pixel coordinates corresponding to those position points and the pixel coordinates after perspective transformation; and acquire a coordinate perspective transformation coefficient according to the first transformation matrix and the second transformation matrix.
Further, the system further comprises a first checking unit, configured to generate an image after perspective transformation according to the coordinate perspective transformation coefficient; judge whether the lane lines in the image after perspective transformation are parallel to one another and perpendicular to the x coordinate axis; and if so, confirm that the coordinate perspective transformation coefficient meets the preset condition.
Further, the system further comprises a second checking unit, configured to acquire pixel coordinates of a preset number of position points from the lane line according to the shape of the lane line; generate and display the lane line on the aerial view according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the preset number of position points on the lane line; judge whether the lane line matches the initial lane line corresponding to the aerial view; if so, confirm that the position coordinate transformation parameters meet the preset condition; and if not, adjust the position coordinate transformation parameters according to the difference between the lane line and the initial lane line corresponding to the aerial view.
Further, the acquisition unit is specifically configured to acquire pixel coordinates of two endpoints from the lane line if the lane line is a straight line; to acquire pixel coordinates of the two endpoints and each break point from the lane line if the lane line is a broken line; and, if the lane line is an arc line, to acquire pixel coordinates of a preset number of position points according to the radian of the lane line, wherein the preset number of position points is in direct proportion to the radian of the arc.
The invention provides a radar data and video data fusion method and system based on perspective transformation. Pixel coordinates after perspective transformation are first acquired according to pixel coordinates of a preset number of position points in a preset rectangular area of the monitored road and the position parameters of the rectangular area corresponding to the preset rectangle in the image; a coordinate perspective transformation coefficient is then acquired from the pixel coordinates and the pixel coordinates after perspective transformation, and position coordinate transformation parameters are acquired from the position coordinates of a preset number of vehicle targets obtained by the radar sensor, the pixel coordinates of those vehicle targets after perspective transformation, and a preset perspective transformation area height value. The position information of the vehicle target is determined according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the vehicle target. Finally, the time between the camera data and the radar data of each target in the monitored road area is matched according to the position information of the vehicle target and the position information of the vehicle target on the aerial view acquired by the radar sensor.
The invention obtains the position coordinate transformation parameters by acquiring the coordinate perspective transformation coefficient of the monitored road area in advance, acquiring the position coordinates of a preset number of vehicle targets on the aerial view through the radar sensor, and acquiring the pixel coordinates of the preset number of vehicle targets after perspective transformation. When a vehicle target exists in the image of the camera-monitored area, the position information of the vehicle target can be acquired in real time according to the pixel coordinates of the vehicle target in the image, and the time between the camera data and the radar data of each target in the monitored road area can be matched according to the position information of the vehicle target on the aerial view acquired by the radar sensor. The data monitored by the camera can thereby be obtained in real time in actual units of meters when intersection monitoring management is performed based on multiple sensors, the accuracy and efficiency of intersection monitoring management based on the radar and the camera are improved, and the fusion of camera data and radar data is realized.
Drawings
FIG. 1 is a flow chart of a perspective transformation-based radar data and video data fusion method provided by the invention;
fig. 2 is a schematic structural diagram of a radar data and video data fusion system based on perspective transformation according to the present invention.
Detailed Description
The technical solution of the present invention is further described in detail below with reference to the accompanying drawings and embodiments.
As shown in fig. 1, a method for fusing radar data and video data based on perspective transformation according to an embodiment of the present invention includes the following steps:
101. Acquiring pixel coordinates after perspective transformation according to pixel coordinates of a preset number of position points in a preset rectangular area of the monitored road and the position parameters of the rectangular area corresponding to the preset rectangle in the image.
The preset number may be 3, 4, 5, etc., and the embodiment of the present invention is not limited.
For the embodiment of the present invention, step 101 may specifically include: and acquiring the pixel coordinate after perspective transformation according to the pixel coordinate and the width and height of the rectangular area corresponding to the preset rectangle in the image.
Specifically, for example, a rectangle is constructed from the pixel coordinates of four vertices taken counterclockwise, and the pixel coordinates after perspective transformation are then calculated according to the formulas Y = [1, 1, w, w] and X = [1, h, h, 1], where (X, Y) are the pixel coordinates after perspective transformation, w is the width of the rectangular area corresponding to the preset rectangle in the image, and h is the height of that rectangular area.
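As an illustration, the corner construction above can be sketched as follows (the function name and return layout are illustrative, not taken from the patent):

```python
def transformed_corners(w, h):
    # Pixel coordinates of the four rectangle vertices after perspective
    # transformation, per the patent's formulas Y = [1, 1, w, w] and
    # X = [1, h, h, 1], with the vertices taken counterclockwise.
    X = [1, h, h, 1]
    Y = [1, 1, w, w]
    return list(zip(X, Y))
```

The four returned (X, Y) pairs serve as the target points when solving for the perspective transformation coefficients in step 102.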
102. And acquiring a coordinate perspective transformation coefficient according to the pixel coordinate and the pixel coordinate after perspective transformation.
For the embodiment of the present invention, step 102 may specifically include: generating a first transformation matrix according to the pixel coordinates after perspective transformation corresponding to the preset number of position points in the preset rectangular area of the monitored road; generating a second transformation matrix according to pixel coordinates corresponding to a preset number of position points in a preset rectangular area of the monitored road and the pixel coordinates after perspective transformation; and acquiring a coordinate perspective transformation coefficient according to the first transformation matrix and the second transformation matrix.
Specifically, for example, a first transformation matrix is generated from the pixel coordinates after perspective transformation, a second transformation matrix is generated from the original pixel coordinates together with the pixel coordinates after perspective transformation, and the coordinate perspective transformation coefficient is then calculated according to the formula f = A⁻¹B, where B is the first transformation matrix, A is the second transformation matrix, f is the coordinate perspective transformation coefficient, X1–X4 and Y1–Y4 are the pixel coordinates, after perspective transformation, of the four points of the rectangular area corresponding to the preset rectangle in the image, and x1–x4 and y1–y4 are the pixel coordinates of the four points of the preset rectangle.
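A hedged sketch of the coefficient solution: the patent's exact layouts for the matrices A and B are not reproduced in the text, so the standard 8-coefficient perspective (homography) formulation is assumed here; only the relation f = A⁻¹B is taken from the patent.

```python
import numpy as np

def perspective_coefficients(src, dst):
    # src: the four (x, y) pixel points of the preset rectangle in the image
    # dst: the corresponding four pixel points after perspective transformation
    # Builds the linear system and solves f = A^-1 B for the 8 perspective
    # coefficients (standard homography formulation -- an assumption, since
    # the patent's matrix layouts are not reproduced in the text).
    A, B = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X])
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y])
        B.extend([X, Y])
    return np.linalg.solve(np.array(A, float), np.array(B, float))
```

Four point pairs with no three collinear points make the 8×8 system uniquely solvable, which is why the calibration rectangle's four vertices suffice.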
For the embodiment of the present invention, in order to further improve the accuracy of the coordinate perspective transformation coefficient, before performing step 103, the method further includes: generating an image after perspective transformation according to the coordinate perspective transformation coefficient; judging whether the lane lines in the image after perspective transformation are parallel to one another and perpendicular to the x coordinate axis; and if so, confirming that the coordinate perspective transformation coefficient meets a preset condition.
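The verification step above might be sketched as follows, assuming each lane line is represented as a list of (x, y) points in the transformed image and allowing a pixel tolerance `tol` (both representation and tolerance are assumptions, not stated in the patent):

```python
def lanes_vertical(lane_lines, tol=2.0):
    # Check that every warped lane line is perpendicular to the x axis,
    # i.e. all of its points share one x value within `tol` pixels.
    # Lines that all pass this check are also mutually parallel.
    for line in lane_lines:           # each line: list of (x, y) points
        xs = [p[0] for p in line]
        if max(xs) - min(xs) > tol:
            return False
    return True
```

If the check fails, the calibration points of step 101 would be re-selected and the coefficient recomputed.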
103. And generating a road aerial view according to the preset width of the monitored road and configuring the monitored target radar data of the monitored road on the aerial view.
The preset width of the monitored road can be obtained in advance according to the existing map tool.
104. And acquiring the position coordinates of a preset number of vehicle targets on the aerial view and acquiring the pixel coordinates of the preset number of vehicle targets after perspective transformation through the radar sensor.
The preset number of vehicle targets is usually two, and the two vehicles must not be collinear along a horizontal or vertical line in the camera or radar coordinate system.
105. And acquiring position coordinate transformation parameters according to the position coordinates of the preset number of vehicle targets, the pixel coordinates of the preset number of vehicle targets after perspective transformation and the preset perspective transformation area height value.
For the embodiment of the present invention, step 105 may specifically include: acquiring the position coordinate transformation parameters according to the formulas X_m = k_x·X − b_x and Y_m = k_y·(−Y + H) + b_y, where (X, Y) are the pixel coordinates after perspective transformation, (X_m, Y_m) are the position coordinates of the vehicle targets, H is the known height of the perspective transformation area, and (k, b) are the position coordinate transformation parameters; the physical meaning of k is the number of meters represented by each pixel, and the physical meaning of b is the distance between the rectangular area selected for perspective transformation and the y axis and x axis respectively. The position coordinate transformation parameters can thus be calculated from the position coordinates of the preset number of vehicle targets and the pixel coordinates after perspective transformation.
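The linear pixel-to-meter mapping of step 105 can be sketched directly from the two formulas (function and parameter names are illustrative):

```python
def pixel_to_meters(X, Y, kx, bx, ky, by, H):
    # Patent's linear mapping from perspective-transformed pixel
    # coordinates (X, Y) to aerial-view position coordinates in meters:
    #   X_m = k_x * X - b_x
    #   Y_m = k_y * (-Y + H) + b_y
    # H is the height of the perspective transformation area; the Y axis
    # is flipped because image rows grow downward.
    return kx * X - bx, ky * (-Y + H) + by
```

With two calibration vehicles whose radar positions and transformed pixel coordinates are known, each (k, b) pair follows from two such linear equations per axis.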
For the embodiment of the present invention, in order to further improve the accuracy of the position coordinate transformation parameters, before step 106, the method may further include: acquiring pixel coordinates of a preset number of position points from the lane line according to the shape of the lane line; generating and displaying the lane line on the aerial view according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the preset number of position points on the lane line; judging whether the lane line matches the initial lane line corresponding to the aerial view; if so, confirming that the position coordinate transformation parameters meet the preset condition; and if not, adjusting the position coordinate transformation parameters according to the difference between the lane line and the initial lane line corresponding to the aerial view.
For the embodiment of the present invention, the step of acquiring the pixel coordinates of the preset number of position points from the lane line according to the shape of the lane line may specifically include: if the lane line is a straight line, acquiring pixel coordinates of two endpoints from the lane line; if the lane line is a broken line, acquiring pixel coordinates of the two endpoints and each break point from the lane line; and if the lane line is an arc line, acquiring pixel coordinates of a preset number of position points according to the radian of the lane line, wherein the preset number of position points is in direct proportion to the radian of the arc — that is, the larger the radian, the more position points are selected — thereby further improving the accuracy of lane line acquisition.
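A sketch of the point-count rule above; the `points_per_radian` constant is an assumption, since the patent only states that the count is proportional to the radian measure of the arc:

```python
import math

def lane_sample_count(shape, radians=0.0, points_per_radian=8, breaks=0):
    # Number of position points to sample from a lane line:
    # - straight line: its 2 endpoints
    # - broken line (polyline): 2 endpoints plus every break point
    # - arc: proportional to the arc's radian measure (assumed constant
    #   of proportionality), never fewer than 2 points
    if shape == "straight":
        return 2
    if shape == "polyline":
        return 2 + breaks
    if shape == "arc":
        return max(2, math.ceil(points_per_radian * radians))
    raise ValueError("unknown lane line shape: " + shape)
```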
For the embodiment of the present invention, the step of generating and displaying the lane line on the aerial view according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the preset number of position points on the lane line may specifically include: acquiring the actual position coordinates of the preset number of position points on the lane line according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of those position points, and connecting the position points according to their actual position coordinates to generate the lane line.
106. And determining the position information of the vehicle target according to the coordinate perspective transformation coefficient, the position coordinate transformation parameter and the pixel coordinate of the vehicle target.
For the embodiment of the present invention, step 106 may specifically include: constructing a vehicle target transformation matrix according to the pixel coordinates of the vehicle target; generating pixel coordinates of the vehicle target after perspective transformation according to the vehicle target transformation matrix and the coordinate perspective transformation coefficient; and acquiring the position information of the vehicle target according to the pixel coordinates of the vehicle target after perspective transformation and the position coordinate transformation parameters.
Specifically, first, a vehicle target transformation matrix is constructed according to the formula c = [x_vehicle, y_vehicle, 1]; then the pixel coordinates of the vehicle target after perspective transformation are generated according to the formula cut = c·fa; and finally the position information of the vehicle target is acquired according to the formulas Y'_vehicle = k_y·(−Y_vehicle + H) + b_y and X'_vehicle = k_x·X_vehicle − b_x, where c is the vehicle target transformation matrix, fa is the coordinate perspective transformation coefficient, cut is the space coordinate after perspective transformation, X_vehicle and Y_vehicle are the pixel coordinates of the vehicle target after perspective transformation, and the subscripts 1–3 denote the first to third points of the preset rectangle.
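Combining steps 102, 105 and 106, the pixel-to-position pipeline might look as follows; the 8-coefficient homography layout is an assumption, since the patent's intermediate matrix formulas are not reproduced in the text:

```python
def vehicle_position(px, py, f, kx, bx, ky, by, H):
    # Sketch of step 106: take the vehicle's image pixel (px, py), apply
    # the perspective coefficients f (8-vector in the assumed standard
    # homography layout) to get its perspective-transformed pixel
    # coordinates, then apply the linear pixel-to-meter parameters.
    a, b, c0, d, e, g, p, q = f
    w = p * px + q * py + 1.0        # projective denominator
    X = (a * px + b * py + c0) / w   # transformed pixel x
    Y = (d * px + e * py + g) / w    # transformed pixel y
    return kx * X - bx, ky * (-Y + H) + by
```

The returned pair is the vehicle's position in meters on the aerial view, directly comparable with the radar-reported position in step 107.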
107. And matching the time between the camera data and the radar data of each target in the monitored road area according to the position information of the vehicle target and the position information of the vehicle target on the aerial view, which is acquired by the radar sensor.
The embodiment of the invention provides a radar data and video data fusion method based on perspective transformation. Pixel coordinates after perspective transformation are first acquired according to pixel coordinates of a preset number of position points in a preset rectangular area of the monitored road and the position parameters of the rectangular area corresponding to the preset rectangle in the image; a coordinate perspective transformation coefficient is then acquired from the pixel coordinates and the pixel coordinates after perspective transformation, and position coordinate transformation parameters are acquired from the position coordinates of a preset number of vehicle targets obtained by the radar sensor, the pixel coordinates of those vehicle targets after perspective transformation, and a preset perspective transformation area height value. The position information of the vehicle target is determined according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the vehicle target. Finally, the time between the camera data and the radar data of each target in the monitored road area is matched according to the position information of the vehicle target and the position information of the vehicle target on the aerial view acquired by the radar sensor.
The invention obtains the position coordinate transformation parameters by acquiring the coordinate perspective transformation coefficient of the monitored road area in advance, acquiring the position coordinates of a preset number of vehicle targets on the aerial view through the radar sensor, and acquiring the pixel coordinates of the preset number of vehicle targets after perspective transformation. When a vehicle target exists in the image of the camera-monitored area, the position information of the vehicle target can be acquired in real time according to the pixel coordinates of the vehicle target in the image, and the time between the camera data and the radar data of each target in the monitored road area can be matched according to the position information of the vehicle target on the aerial view acquired by the radar sensor. The data monitored by the camera can thereby be obtained in real time in actual units of meters when intersection monitoring management is performed based on multiple sensors, the accuracy and efficiency of intersection monitoring management based on the radar and the camera are improved, and the fusion of camera data and radar data is realized.
In order to implement the method provided by the embodiment of the present invention, an embodiment of the present invention provides a system for fusing radar data and video data based on perspective transformation. As shown in fig. 2, the system comprises: an acquisition unit 21, a determining unit 22, a matching unit 23, a first checking unit 24 and a second checking unit 25.
The obtaining unit 21 is configured to obtain perspective-transformed pixel coordinates according to the pixel coordinates of a preset number of position points in a preset rectangular area of the monitored road and the corresponding position parameters of the preset rectangle in the image.
Since a rectangular road region must remain a rectangle, rather than some other quadrilateral, in the top view after coordinate transformation, the four corner points of a chosen rectangle on the road are selected for actual calibration measurement.
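For illustration, solving the coordinate perspective transformation coefficient from those four calibrated point pairs can be sketched in pure NumPy. The pixel and rectified coordinates below are made-up example values, not taken from the patent:

```python
import numpy as np

def solve_perspective_coeffs(src_pts, dst_pts):
    """Solve the 8 coefficients (a..h) of the perspective transform
        x' = (a*u + b*v + c) / (g*u + h*v + 1)
        y' = (d*u + e*v + f) / (g*u + h*v + 1)
    from exactly 4 calibration point pairs (an 8x8 linear system)."""
    A, rhs = [], []
    for (u, v), (x, y) in zip(src_pts, dst_pts):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x])
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y])
        rhs.extend([x, y])
    return np.linalg.solve(np.array(A, float), np.array(rhs, float))

def warp_point(coeffs, u, v):
    """Apply the transform to one pixel coordinate."""
    a, b, c, d, e, f, g, h = coeffs
    w = g * u + h * v + 1.0
    return ((a * u + b * v + c) / w, (d * u + e * v + f) / w)

# Example: corners of a road rectangle as imaged (a trapezoid, far edge
# narrower) mapped to a true rectangle in the top view.
src = [(300, 400), (500, 400), (700, 700), (100, 700)]
dst = [(200, 0), (600, 0), (600, 300), (200, 300)]
coeffs = solve_perspective_coeffs(src, dst)
```

By construction, each calibration pixel maps exactly onto its rectified counterpart; OpenCV's `cv2.getPerspectiveTransform` computes the same coefficients in matrix form.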
The obtaining unit 21 is further configured to obtain a coordinate perspective transformation coefficient according to the pixel coordinates and the perspective-transformed pixel coordinates; generate a road aerial view according to the preset width of the monitored road and configure the monitored-target radar data of the monitored road on the aerial view; acquire, through the radar sensor, the position coordinates of a preset number of vehicle targets on the aerial view and obtain the perspective-transformed pixel coordinates of those vehicle targets; and obtain position coordinate transformation parameters according to the position coordinates of the preset number of vehicle targets, their perspective-transformed pixel coordinates and a preset perspective transformation area height value.
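Under the simplifying assumption of an affine per-axis relation between transformed pixel coordinates and metre positions (the numbers below are hypothetical, not from the patent), the position coordinate transformation parameters can be fitted from the radar-measured calibration vehicles like this:

```python
import numpy as np

# Hypothetical sketch: for each calibration vehicle, the radar reports its
# bird's-eye position in metres while the camera yields its
# perspective-transformed pixel coordinate. Assuming x_m = s * px + t per
# axis (the preset transform-region height fixing the y origin), the scale
# (metres per pixel) and offset follow from a least-squares line fit.
def fit_axis_params(pixels, metres):
    s, t = np.polyfit(pixels, metres, 1)   # returns [slope, intercept]
    return s, t

px = np.array([100.0, 300.0, 500.0])   # transformed x pixel coordinates
xm = np.array([3.5, 10.5, 17.5])       # radar x positions in metres
s_x, t_x = fit_axis_params(px, xm)     # 0.035 m/pixel, zero offset here
```

The same fit is applied to the y axis; more calibration targets simply over-determine the least-squares system.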
The determining unit 22 is configured to determine the position information of the vehicle target according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the vehicle target.
The matching unit 23 is configured to match the time between the camera data and the radar data of each target in the monitored road area according to the position information of the vehicle target and the position information of the vehicle target on the aerial view acquired by the radar sensor.
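A minimal sketch of such position-based time matching follows; the record layout and the distance threshold are illustrative assumptions, not specified by the patent:

```python
import math

def match_camera_to_radar(cam_records, radar_records, max_dist_m=2.0):
    """Pair each camera detection with the radar detection whose bird's-eye
    position is nearest (within max_dist_m metres), thereby matching the two
    sensors' timestamps. Each record is (timestamp_s, x_m, y_m)."""
    pairs = []
    for t_cam, xc, yc in cam_records:
        best = min(radar_records,
                   key=lambda r: math.hypot(xc - r[1], yc - r[2]))
        if math.hypot(xc - best[1], yc - best[2]) <= max_dist_m:
            pairs.append((t_cam, best[0]))   # (camera time, radar time)
    return pairs
```

Because both sensors' positions are expressed in the same bird's-eye metre frame, nearest-position association suffices to align their timestamps per target.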
Further, the obtaining unit 21 is specifically configured to generate a first transformation matrix according to the perspective-transformed pixel coordinates corresponding to a preset number of position points in a preset rectangular area of the monitored road; generate a second transformation matrix according to the pixel coordinates corresponding to those position points and their perspective-transformed pixel coordinates; and obtain the coordinate perspective transformation coefficient according to the first transformation matrix and the second transformation matrix.
Further, the first verification unit 24 is configured to generate a perspective-transformed image according to the coordinate perspective transformation coefficient; judge whether the lane lines in the perspective-transformed image are parallel to one another and perpendicular to the x coordinate axis; and if so, confirm that the coordinate perspective transformation coefficient meets the preset condition.
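This parallelism check can be sketched as follows; the endpoint representation of a lane line and the angular tolerance are assumptions made for illustration:

```python
import math

def lane_lines_vertical(lines, tol_deg=2.0):
    """Return True if every transformed lane line is perpendicular to the
    x axis (and hence all lines are mutually parallel), i.e. each line's
    endpoints share the same x pixel coordinate up to an angular
    tolerance. Each line is ((x1, y1), (x2, y2))."""
    for (x1, y1), (x2, y2) in lines:
        deviation = math.degrees(math.atan2(abs(x2 - x1), abs(y2 - y1)))
        if deviation > tol_deg:    # deviation from the vertical
            return False
    return True
```

If the check fails, the calibration points would be re-measured and the coefficient recomputed.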
Further, the second verification unit 25 is configured to obtain the pixel coordinates of a preset number of position points from the lane line according to the shape of the lane line; generate and display the lane line on the aerial view according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the preset number of position points on the lane line; judge whether the lane line matches the corresponding initial lane line on the aerial view; if so, confirm that the position coordinate transformation parameters meet the preset condition; and if not, adjust the position coordinate transformation parameters according to the difference between the lane line and the corresponding initial lane line on the aerial view.
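The comparison-and-adjustment step might look like the sketch below; the matching threshold and the ratio-based adjustment rule are illustrative assumptions, not prescribed by the patent:

```python
def check_and_adjust_scale(scale, generated_x, initial_x, tol_m=0.2):
    """Compare the bird's-eye x positions (metres) of the regenerated lane
    line against the initial lane line. If the mean deviation is within
    tol_m, the position coordinate transformation parameter is confirmed;
    otherwise it is rescaled by the ratio of the two lines' positions.
    Returns (parameter, confirmed)."""
    err = sum(abs(g - r) for g, r in zip(generated_x, initial_x)) / len(generated_x)
    if err <= tol_m:
        return scale, True
    return scale * sum(initial_x) / sum(generated_x), False
```

In practice the adjusted parameter would be fed back and the lane line regenerated until it matches the initial lane line.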
Further, the obtaining unit 21 is specifically configured to obtain pixel coordinates of two endpoints from the lane line if the lane line is a straight line; if the lane line is a broken line, acquiring pixel coordinates of two end points and each broken point from the lane line; if the lane line is an arc line, acquiring pixel coordinates of a preset number of position points according to the radian of the lane line, wherein the number of the preset number of position points is in direct proportion to the radian of the arc line.
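The shape-dependent point sampling can be sketched as follows; parameterising an arc by centre, radius and angles is an assumption made for illustration:

```python
import math

def sample_lane_points(shape, data, points_per_radian=8):
    """Pick pixel coordinates from a lane line according to its shape:
    a straight line contributes its two endpoints, a broken line its
    endpoints plus every break point, and an arc a number of samples
    proportional to its subtended angle in radians."""
    if shape == "straight":
        return [data[0], data[-1]]             # two endpoints only
    if shape == "broken":
        return list(data)                      # endpoints + break points
    if shape == "arc":
        (cx, cy), radius, a0, a1 = data
        n = max(2, int(points_per_radian * abs(a1 - a0)))
        return [(cx + radius * math.cos(a0 + (a1 - a0) * i / (n - 1)),
                 cy + radius * math.sin(a0 + (a1 - a0) * i / (n - 1)))
                for i in range(n)]
    raise ValueError(f"unknown lane-line shape: {shape}")
```

The arc branch realises the stated proportionality: a quarter-circle arc yields more sample points than a shallow curve.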
The invention provides a radar data and video data fusion system based on perspective transformation. The system first obtains perspective-transformed pixel coordinates according to the pixel coordinates of a preset number of position points in a preset rectangular area of the monitored road and the position parameters of that rectangle in the image. It then obtains a coordinate perspective transformation coefficient according to the pixel coordinates and the perspective-transformed pixel coordinates, and obtains position coordinate transformation parameters according to the position coordinates of a preset number of vehicle targets acquired by the radar sensor, the perspective-transformed pixel coordinates of those vehicle targets, and a preset perspective transformation area height value. The position information of a vehicle target is then determined according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and the pixel coordinates of the vehicle target. Finally, the time between the camera data and the radar data of each target in the monitored road area is matched according to the position information of the vehicle target and the position information of the vehicle target on the aerial view acquired by the radar sensor.
The invention obtains the coordinate perspective transformation coefficient of the monitored road area in advance, acquires the position coordinates of a preset number of vehicle targets on the aerial view through the radar sensor, obtains the perspective-transformed pixel coordinates of those vehicle targets, and thereby obtains the position coordinate transformation parameters. When a vehicle target appears in the image of the camera-monitored area, its position information can be obtained in real time from its pixel coordinates in the image, and the time between the camera data and the radar data of each target in the monitored road area can be matched according to the position information of the vehicle target on the aerial view obtained by the radar sensor. Camera-monitored data expressed in metres can therefore be obtained in real time when intersection monitoring and management are performed with multiple sensors, which improves the accuracy and efficiency of radar-and-camera-based intersection monitoring and management while realizing the fusion of camera data and radar data.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or".
Those of skill in the art will also appreciate that the various illustrative logical blocks, elements, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate the interchangeability of hardware and software, various illustrative components, elements, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design requirements of the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic system, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing systems, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a user terminal. In the alternative, the processor and the storage medium may reside in different components in a user terminal.
In one or more exemplary designs, the functions described above in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media that facilitate transfer of a computer program from one place to another. Storage media may be any available media that can be accessed by a general purpose or special purpose computer. For example, such computer-readable media can include, but is not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store program code in the form of instructions or data structures and which can be read by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Additionally, any connection is properly termed a computer-readable medium; thus, if the software is transmitted from a website, server, or other remote source via a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wirelessly, e.g., by infrared, radio or microwave, then those media are included in the definition of computer-readable medium. Disk and disc, as used herein, include compact disc, laser disc, optical disc, DVD, floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above may also be included in the computer-readable medium.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A method for fusion of radar data and video data based on perspective transformation, the method comprising:
acquiring perspective-transformed pixel coordinates according to pixel coordinates of a preset number of position points in a preset rectangular area of the monitored road and corresponding position parameters of the preset rectangle in an image;
acquiring a coordinate perspective transformation coefficient according to the pixel coordinate and the pixel coordinate after perspective transformation;
generating a road aerial view according to the preset width of the monitored road and configuring the monitored target radar data of the monitored road on the aerial view;
acquiring position coordinates of a preset number of vehicle targets on the aerial view and acquiring pixel coordinates of the preset number of vehicle targets after perspective transformation through the radar sensor;
acquiring position coordinate transformation parameters according to the position coordinates of the preset number of vehicle targets, the pixel coordinates of the preset number of vehicle targets after perspective transformation and the preset perspective transformation area height value;
determining the position information of the vehicle target according to the coordinate perspective transformation coefficient, the position coordinate transformation parameter and the pixel coordinate of the vehicle target;
and matching the time between the camera data and the radar data of each target in the monitored road area according to the position information of the vehicle target and the position information of the vehicle target on the aerial view, which is acquired by the radar sensor.
2. The method of claim 1, wherein the step of acquiring a coordinate perspective transformation coefficient according to the pixel coordinates and the perspective-transformed pixel coordinates comprises:
generating a first transformation matrix according to the pixel coordinates after perspective transformation corresponding to the preset number of position points in the preset rectangular area of the monitored road;
generating a second transformation matrix according to pixel coordinates corresponding to preset number position points in a preset rectangular area of the monitored road and pixel coordinates after perspective transformation;
and acquiring a coordinate perspective transformation coefficient according to the first transformation matrix and the second transformation matrix.
3. The method of claim 1 or 2, wherein before the step of generating a road aerial view according to the preset width of the monitored road and configuring the monitored-target radar data of the monitored road on the aerial view, the method further comprises:
generating an image after perspective transformation according to the coordinate perspective transformation coefficient;
judging whether the lane lines in the perspective-transformed image are parallel to one another and perpendicular to the x coordinate axis;
and if so, confirming that the coordinate perspective transformation coefficient meets the preset condition.
4. The method of claim 1, wherein before the step of determining the position information of the vehicle target according to the coordinate perspective transformation coefficient, the position coordinate transformation parameter and the pixel coordinate of the vehicle target, the method further comprises:
acquiring pixel coordinates of preset number of position points from the lane line according to the shape corresponding to the lane line;
generating and displaying the lane line on the aerial view according to the coordinate perspective transformation coefficient, the position coordinate transformation parameter and pixel coordinates of a preset number of position points on the lane line;
judging whether the lane line is matched with the initial lane line corresponding to the aerial view;
if so, confirming that the position coordinate transformation parameters meet the preset condition;
and if not, adjusting the position coordinate transformation parameters according to the difference between the lane line and the initial lane line corresponding to the aerial view.
5. The method of claim 4, wherein the step of obtaining the pixel coordinates of a preset number of position points from the lane line according to the corresponding shape of the lane line comprises:
if the lane line is a straight line, acquiring pixel coordinates of two endpoints from the lane line;
if the lane line is a broken line, acquiring pixel coordinates of two end points and each broken point from the lane line;
if the lane line is an arc line, acquiring pixel coordinates of a preset number of position points according to the radian of the lane line, wherein the number of the preset number of position points is in direct proportion to the radian of the arc line.
6. A perspective transformation based radar data and video data fusion system, the system comprising:
the acquisition unit is used for acquiring pixel coordinates after perspective transformation according to pixel coordinates of preset number position points in a preset rectangular area of the monitored road and corresponding rectangular area position parameters of the preset rectangle in an image;
the acquisition unit is further used for acquiring a coordinate perspective transformation coefficient according to the pixel coordinates and the perspective-transformed pixel coordinates; generating a road aerial view according to the preset width of the monitored road and configuring the monitored-target radar data of the monitored road on the aerial view; acquiring, through the radar sensor, position coordinates of a preset number of vehicle targets on the aerial view and acquiring perspective-transformed pixel coordinates of the preset number of vehicle targets; and acquiring position coordinate transformation parameters according to the position coordinates of the preset number of vehicle targets, the perspective-transformed pixel coordinates of the preset number of vehicle targets and a preset perspective transformation area height value;
the determining unit is used for determining the position information of the vehicle target according to the coordinate perspective transformation coefficient, the position coordinate transformation parameter and the pixel coordinate of the vehicle target;
and the matching unit is used for matching the time between the camera data and the radar data of each target in the monitored road area according to the position information of the vehicle target and the position information of the vehicle target on the aerial view, which is acquired by the radar sensor.
7. The perspective transformation based radar data and video data fusion system of claim 6,
the acquisition unit is specifically used for generating a first transformation matrix according to the pixel coordinates after perspective transformation corresponding to the preset number of position points in the preset rectangular area of the monitored road; generating a second transformation matrix according to pixel coordinates corresponding to preset number position points in a preset rectangular area of the monitored road and pixel coordinates after perspective transformation; and acquiring a coordinate perspective transformation coefficient according to the first transformation matrix and the second transformation matrix.
8. The perspective transformation based radar data and video data fusion system of claim 6, further comprising: a first verification unit; the first verification unit is used for generating a perspective-transformed image according to the coordinate perspective transformation coefficient; judging whether the lane lines in the perspective-transformed image are parallel to one another and perpendicular to the x coordinate axis; and if so, confirming that the coordinate perspective transformation coefficient meets the preset condition.
9. The perspective transformation based radar data and video data fusion system of claim 6, further comprising: a second verification unit;
the second verification unit is used for acquiring pixel coordinates of a preset number of position points from the lane line according to the shape of the lane line; generating and displaying the lane line on the aerial view according to the coordinate perspective transformation coefficient, the position coordinate transformation parameters and pixel coordinates of the preset number of position points on the lane line; judging whether the lane line matches the corresponding initial lane line on the aerial view; if so, confirming that the position coordinate transformation parameters meet the preset condition; and if not, adjusting the position coordinate transformation parameters according to the difference between the lane line and the corresponding initial lane line on the aerial view.
10. The perspective transformation based radar data and video data fusion system of claim 6,
the acquiring unit is specifically configured to acquire pixel coordinates of two endpoints from the lane line if the lane line is a straight line; if the lane line is a broken line, acquiring pixel coordinates of two end points and each broken point from the lane line; if the lane line is an arc line, acquiring pixel coordinates of a preset number of position points according to the radian of the lane line, wherein the number of the preset number of position points is in direct proportion to the radian of the arc line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210510405.4A CN115015909A (en) | 2022-05-11 | 2022-05-11 | Radar data and video data fusion method and system based on perspective transformation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115015909A true CN115015909A (en) | 2022-09-06 |
Family
ID=83069821
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210510405.4A Pending CN115015909A (en) | 2022-05-11 | 2022-05-11 | Radar data and video data fusion method and system based on perspective transformation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115015909A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117456013A (en) * | 2023-12-22 | 2024-01-26 | 江苏集萃深度感知技术研究所有限公司 | Automatic calibration method of radar integrated machine based on countermeasure generation network |
CN117456013B (en) * | 2023-12-22 | 2024-03-05 | 江苏集萃深度感知技术研究所有限公司 | Automatic calibration method of radar integrated machine based on countermeasure generation network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||