CN117456013A - Automatic calibration method of radar integrated machine based on countermeasure generation network - Google Patents

Automatic calibration method of radar integrated machine based on countermeasure generation network

Info

Publication number
CN117456013A
Authority
CN
China
Prior art keywords
target
image
matrix
result
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311775975.7A
Other languages
Chinese (zh)
Other versions
CN117456013B (en)
Inventor
汪宗洋
金奇
岳玉涛
顾炎飚
孙令萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Jicui Depth Perception Technology Research Institute Co ltd
Original Assignee
Jiangsu Jicui Depth Perception Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Jicui Depth Perception Technology Research Institute Co ltd filed Critical Jiangsu Jicui Depth Perception Technology Research Institute Co ltd
Priority to CN202311775975.7A priority Critical patent/CN117456013B/en
Publication of CN117456013A publication Critical patent/CN117456013A/en
Application granted granted Critical
Publication of CN117456013B publication Critical patent/CN117456013B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40 Means for monitoring or calibrating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of road traffic, and particularly discloses an automatic calibration method of a radar integrated machine based on a countermeasure generation network, which comprises the following steps: acquiring target image information acquired by a camera device and target coordinate information acquired by a radar device; performing image conversion on the target image information according to the countermeasure generation network to obtain a bird's eye view; performing target matching according to the bird's eye view and the target coordinate information to obtain a plurality of first target matching point pairs, and performing target matching according to the bird's eye view and the target image information to obtain a plurality of second target matching point pairs; performing translation scaling calculation according to the first target matching point pairs to obtain a translation scaling matrix, and performing image projection calculation according to the second target matching point pairs to obtain an image projection matrix; and calibrating according to the translation scaling matrix and the image projection matrix to obtain a calibration result. The automatic calibration method of the radar integrated machine based on the countermeasure generation network improves the matching efficiency and accuracy of radar-vision calibration.

Description

Automatic calibration method of radar integrated machine based on countermeasure generation network
Technical Field
The invention relates to the technical field of road traffic, and in particular to an automatic calibration method of a radar integrated machine based on a countermeasure generation network (i.e., a generative adversarial network).
Background
In the technical field of road traffic, when a traffic management platform needs to track vehicles running on roads and the like, radar-vision fusion technology is generally required. The radar-vision fusion integrated machine is a roadside edge device that produces statistics or identification of traffic flow, traffic state, traffic events and the like from the inputs of sensors such as cameras and millimeter wave radars; a valid identification result can be obtained during image analysis only after calibration has been performed.
The existing calibration modes include manual calibration and automatic calibration. Manual calibration is performed by the user; for a user without a certain technical foundation the difficulty is high, the calibration precision is low, the procedure is complex, and the calibration efficiency is low for every user. Automatic calibration selects matching points by an algorithm: point pairs are recorded when the road-surface vehicle layout is clear, and once enough qualifying point pairs are collected the projection matrix is calculated. This method improves efficiency, but its duration is uncontrollable; on road sections where vehicles are sparse or traffic conditions are complex, suitable matched point pairs may not be found for a long time.
Therefore, how to improve the matching efficiency and matching accuracy of the point pairs in the automatic calibration process is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The invention provides an automatic calibration method of a radar integrated machine based on a countermeasure generation network, which solves the problems of low matching efficiency and low accuracy of radar-vision calibration in the related art.
As one aspect of the present invention, there is provided an automatic calibration method of a radar integrated machine based on a countermeasure generation network, comprising:
acquiring target image information acquired by a camera device and target coordinate information acquired by a radar device;
performing image conversion on the target image information according to the countermeasure generation network to obtain a bird's eye view;
performing target matching according to the aerial view and the target coordinate information to obtain a plurality of first target matching point pairs, and performing target matching according to the aerial view and the target image information to obtain a plurality of second target matching point pairs;
performing translation scaling calculation according to the first target matching point pair to obtain a translation scaling matrix, and performing image projection calculation according to the second target matching point pair to obtain an image projection matrix;
calibrating according to the translation scaling matrix and the image projection matrix to obtain a calibration result;
performing translation scaling calculation according to the first target matching point pair to obtain a translation scaling matrix, wherein the translation scaling matrix is obtained after performing image scaling and image translation on the first target matching point pair;
performing image projection calculation according to the second target matching point pair to obtain an image projection matrix, wherein the image projection matrix is obtained by combining perspective division after performing projection transformation on target image information in the second target matching point pair;
and calibrating according to the translation scaling matrix and the image projection matrix to obtain the calibration result comprises performing coordinate system conversion according to the translation scaling matrix, and then performing a further coordinate system conversion on the obtained result according to the image projection matrix, so as to obtain the calibration result.
Further, performing object matching according to the aerial view and the object coordinate information to obtain a plurality of first object matching point pairs, including:
performing target detection on the aerial view according to a target detection algorithm to obtain a target detection result;
performing target shape matching according to the target detection result and the target coordinate information;
and obtaining a plurality of first target matching point pairs according to the target shape matching result.
Further, performing target shape matching with the target coordinate information according to the target detection result, including:
determining the target to be matched according to the target coordinate information;
and carrying out target shape random matching in the target detection result according to the target to be matched to obtain a target shape matching result.
Further, performing target shape random matching in the target detection result according to the target to be matched to obtain a target shape matching result, including:
acquiring a position set to be matched in the target detection result according to the target to be matched;
looping over the coordinate values of the positions to be matched, and adjusting the coordinates of each position to be matched according to its parameter information within the loop;
and calibrating the coordinates of the adjusted position to be matched to obtain a target shape matching result.
Further, performing object matching according to the aerial view and the object image information to obtain a plurality of second object matching point pairs, including:
respectively carrying out target detection on the target image information and the aerial view according to a target detection algorithm to obtain a target detection result;
when the target detection result is that a target exists, performing target distribution shape matching according to the aerial view and the target image information, and obtaining a plurality of shape matching point pairs according to the target distribution shape matching result;
the plurality of shape matching point pairs are taken as second target matching point pairs.
Further, performing image projection calculation according to the second target matching point pair to obtain an image projection matrix, including:
performing projective transformation on the target image information in the second target matching point pair to obtain a perspective transformation matrix;
and clipping the canonical view volume according to the perspective transformation matrix, and performing perspective division on the result obtained after clipping the canonical view volume to obtain an image projection matrix.
Further, performing a panning scaling calculation according to the first target matching point pair to obtain a panning scaling matrix, including:
image scaling is carried out on the first target matching point pair according to a scaling factor, and a scaling matrix is obtained;
performing image translation on the first target matching point pair according to the offset to obtain a translation matrix;
and obtaining a translation scaling matrix according to the scaling matrix and the translation matrix.
Further, calibrating according to the translation scaling matrix and the image projection matrix to obtain a calibration result, including:
performing coordinate system conversion according to the translation scaling matrix and the target coordinate information to obtain a camera coordinate system conversion result;
performing image coordinate system conversion according to the camera coordinate system conversion result to obtain an image coordinate system conversion result;
and performing pixel coordinate system conversion according to the image coordinate system conversion result and the image projection matrix to obtain a pixel coordinate system conversion result.
Further, performing image conversion on the target image information according to the countermeasure generation network to obtain a bird's eye view, including:
training a challenge-generating network;
and inputting the target image information into the countermeasure generation network to perform image conversion, and obtaining a bird's eye view.
Further, training the challenge-generating network, comprising:
training a GAN network according to paired target image information and bird's-eye-view datasets, the GAN network comprising at least a generator and a discriminator, the generator and the discriminator being trained adversarially against each other;
and setting a loss function of the GAN network, and performing constraint training on the GAN network according to the loss function to obtain a countermeasure generation network, wherein the loss function is used for constraining the similarity of the generator and the discriminator.
According to the automatic calibration method of the radar integrated machine based on the countermeasure generation network, target image information is converted into the bird's eye view through the countermeasure generation network; target matching is conducted using the bird's eye view together with the target coordinate information, and translation scaling processing is conducted on the matched result; target matching is conducted using the bird's eye view together with the target image information, and image projection processing is conducted on the matched result; finally, calibration is conducted according to the result of the translation scaling processing and the result of the image projection processing, so as to obtain a calibration result. In this method, converting the image information into the bird's eye view through the countermeasure generation network effectively improves the matching efficiency of reliable point pairs, and because the converted bird's eye view can separate the vehicles, the matching accuracy of the image and the radar coordinates is improved to a certain extent.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification, illustrate the invention and together with the description serve to explain, without limitation, the invention.
Fig. 1 is a flowchart of an automatic calibration method of a radar integrated machine based on a countermeasure generation network.
Fig. 2 is a flowchart of obtaining a first target matching point pair according to the present invention.
Fig. 3 is a schematic diagram of conversion from target image information to a bird's eye view according to the present invention.
Fig. 4 is a schematic diagram of the adversarial training between the generator and the discriminator provided by the present invention.
Fig. 5 is a flowchart of obtaining a pan scaling matrix according to the present invention.
FIG. 6 is a flow chart of obtaining calibration results provided by the present invention.
Fig. 7a is a schematic diagram of target image information and a bird's eye view according to the present invention.
Fig. 7b is another set of schematic diagrams of the target image information and the bird's eye view provided by the present invention.
Fig. 8 is a view of the target detection result in the target image information and the bird's eye view according to the present invention.
Fig. 9 is a flowchart of a specific implementation manner of an automatic calibration method of a radar integrated machine based on a countermeasure generation network.
FIG. 10 is a flow chart of multi-objective distribution shape matching provided by the present invention.
Detailed Description
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
In order that those skilled in the art will better understand the present invention, the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without making any inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this embodiment, a method for automatically calibrating a radar integrated machine based on a countermeasure generation network is provided. Fig. 1 is a flowchart of the method for automatically calibrating a radar integrated machine based on a countermeasure generation network; as shown in fig. 1, the method includes:
s100, acquiring target image information acquired by a camera device and target coordinate information acquired by a radar device;
in the embodiment of the invention, the image information of the targets is acquired by the imaging device arranged on the road side or the vehicle, and the coordinate information of the targets is acquired by the radar device, wherein the targets can be vehicles.
S200, performing image conversion on the target image information according to the countermeasure generation network to obtain a bird' S eye view;
in the embodiment of the invention, the acquired image information of the vehicle can be subjected to image conversion through the countermeasure generation network to obtain the aerial view, and the method can be specifically understood to be that the acquired image information of the vehicle is separated from the vehicle target, the vehicle body size and the vehicle position are reserved, and the target detection is performed under the aerial view, so that a more accurate result with a definite position relationship can be obtained.
S300, performing target matching according to the aerial view and the target coordinate information to obtain a plurality of first target matching point pairs, and performing target matching according to the aerial view and the target image information to obtain a plurality of second target matching point pairs;
in the embodiment of the invention, the aerial view and the target coordinate information acquired by the radar device are subjected to multi-target distribution shape matching, and the aerial view and the target image information acquired by the image pickup device are subjected to multi-target distribution shape matching.
S400, performing translation scaling calculation according to the first target matching point pair to obtain a translation scaling matrix, and performing image projection calculation according to the second target matching point pair to obtain an image projection matrix;
in the embodiment of the invention, the translation scaling calculation is performed on the result of the matching of the aerial view and the target coordinate information acquired by the radar device, the translation scaling matrix can be obtained, and the image projection calculation is performed on the result of the matching of the aerial view and the target image information acquired by the image pickup device, so that the image projection matrix is obtained.
It should be appreciated that when shape matching is performed between the bird's eye view and the target image information acquired by the image capturing device, a sufficient number of target matching point pairs are collected for the image projection matrix calculation. In this process, if no target appears for a long time, the image projection matrix calculation can still be performed by forming point pairs from the positions of lane lines and surrounding markers. In the embodiment of the invention, the calculation of the image projection matrix effectively increases the calculation speed, thereby ensuring a real-time calibration result for the radar-vision all-in-one machine.
When the radar has targets, the bird's eye view generated by the countermeasure generation network and the synchronized radar targets are collected. Both are in the bird's-eye-view coordinate system and differ only by a translation and a scaling, so the target shapes are matched, frames with high matching degree are retained, and once enough point pairs are collected the translation scaling matrix is calculated, averaging over the pairs to reduce errors.
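Since, as stated above, the two point sets differ only by a per-axis scale and offset, the translation scaling matrix can be estimated from the collected point pairs by a least-squares fit. The sketch below illustrates this under the assumption of NumPy; the function name, the array layout, and stacking pairs from all retained frames (which realizes the averaging mentioned above) are illustrative choices, not the patent's prescribed implementation.

```python
import numpy as np

def fit_translation_scale(radar_pts, bev_pts):
    """Fit per-axis scale s and offset d mapping radar targets onto
    bird's-eye-view detections: bev = s * radar + d (least squares).

    radar_pts, bev_pts: (N, 2) arrays of matched point pairs, stacked
    over all retained frames. Returns a 3x3 homogeneous matrix."""
    radar_pts = np.asarray(radar_pts, dtype=float)
    bev_pts = np.asarray(bev_pts, dtype=float)
    M = np.eye(3)
    for axis in range(2):  # x and y are independent
        A = np.stack([radar_pts[:, axis], np.ones(len(radar_pts))], axis=1)
        (s, d), *_ = np.linalg.lstsq(A, bev_pts[:, axis], rcond=None)
        M[axis, axis], M[axis, 2] = s, d
    return M
```

Fitting pairs from many frames in one solve plays the same role as averaging per-frame estimates: both reduce the error contributed by individual detections.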
S500, calibrating according to the translation scaling matrix and the image projection matrix to obtain a calibration result.
In the embodiment of the invention, the calibration matrix can be obtained by calculating the translation scaling matrix and the image projection matrix, and the calibration result is obtained.
According to the automatic calibration method of the radar integrated machine based on the countermeasure generation network provided by the embodiment of the invention, target image information is converted into the bird's eye view through the countermeasure generation network; target matching is carried out using the bird's eye view in combination with the target coordinate information, and translation scaling processing is carried out on the matched result; target matching is carried out using the bird's eye view in combination with the target image information, and image projection processing is carried out on the matched result; finally, calibration is carried out according to the result of the translation scaling processing and the result of the image projection processing, so as to obtain a calibration result. In this method, since the image information is converted into the bird's eye view through the countermeasure generation network, the matching efficiency of reliable point pairs can be effectively improved, and since the converted bird's eye view separates the vehicles, the matching accuracy between the image and the radar coordinates is improved to a certain extent.
In order to achieve matching between a bird's eye view and target coordinate information, specifically, as shown in fig. 2, target matching is performed according to the bird's eye view and the target coordinate information, to obtain a plurality of first target matching point pairs, including:
s310, performing target detection on the aerial view according to a target detection algorithm to obtain a target detection result;
s320, performing target shape matching with the target coordinate information according to the target detection result;
s330, a plurality of first target matching point pairs are obtained according to the target shape matching result.
In the embodiment of the invention, the targets in the bird's eye view can be detected by a target detection algorithm, which may specifically be a YOLO-series target detection algorithm. After the detection algorithm has detected the vehicle targets in the bird's eye view, target shape matching is performed between the target detection result and the target coordinate information acquired by the radar device, thereby obtaining a plurality of first target matching point pairs.
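As an illustration of this detection step, the sketch below runs a YOLO-series detector on a bird's-eye-view frame and extracts the box centers as candidate matching points. The ultralytics package, the yolov8n.pt weights and the file name are assumptions for the example; the patent only specifies a YOLO-series algorithm.

```python
from ultralytics import YOLO  # assumed implementation of a YOLO-series detector

model = YOLO("yolov8n.pt")           # any YOLO-series weights would do
results = model("bev_frame.png")     # detect vehicles in the bird's eye view

# Box centers (cx, cy) become the candidate points for shape matching.
centers = [box.xywh[0][:2].tolist() for box in results[0].boxes]
```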
Further specifically, performing target shape matching with the target coordinate information according to the target detection result includes:
determining the target to be matched according to the target coordinate information;
and carrying out target shape random matching in the target detection result according to the target to be matched to obtain a target shape matching result.
It should be understood that in the embodiment of the present invention, the targets to be matched can be determined according to the target coordinate information and then randomly matched within the target detection result, so that the matching result is obtained.
For example, suppose there are three vehicles on the road; the vehicles form a connecting-line shape on the road, and after the camera image is converted into a bird's eye view, the same three-vehicle shape appears in the image. The radar coordinates also contain three vehicles, and finding their positions in the camera's bird's eye view realizes the target shape matching.
It should be understood that when matching is performed with several vehicles on the road, for example three vehicles one behind another whose layout resembles a triangle, the algorithm randomly selects two targets from each frame for random matching, and points that cannot be matched are filtered out by the positional relation.
In the embodiment of the invention, the random matching of the algorithm can be realized by adopting the Hungarian matching algorithm.
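A minimal sketch of this assignment using the Hungarian algorithm follows, via SciPy's linear_sum_assignment on a Euclidean cost matrix; the max_dist threshold used to drop pairs whose positional relation cannot match is an assumed parameter.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(radar_pts, image_pts, max_dist=5.0):
    """Assign radar targets to detected targets by minimizing the total
    pairwise distance, then drop assignments whose distance exceeds
    max_dist (points that cannot be matched by positional relation)."""
    radar_pts = np.asarray(radar_pts, dtype=float)
    image_pts = np.asarray(image_pts, dtype=float)
    cost = np.linalg.norm(radar_pts[:, None, :] - image_pts[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```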
Further, performing target shape random matching in the target detection result according to the target to be matched to obtain a target shape matching result, including:
acquiring a position set to be matched in the target detection result according to the target to be matched;
looping over the coordinate values of the positions to be matched, and adjusting the coordinates of each position to be matched according to its parameter information within the loop;
and calibrating the coordinates of the adjusted position to be matched to obtain a target shape matching result.
In the embodiment of the invention, as shown in fig. 10, the template image and the target detection image are loaded and an OpenCV template matching function is applied to obtain the set of matching positions of the vehicles in the target detection image. The coordinate values are then looped over; within the loop the parameter information of each vehicle is packed into a tuple, and the coordinate index of the tuple is adjusted to generate a rectangular frame marking the specific position in the matched image.
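The following sketch shows this flow with OpenCV's template matching; the file names and the 0.8 score threshold are assumptions for illustration.

```python
import cv2

template = cv2.imread("vehicle_template.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("detection_image.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Slide the template over the detection image and score every position.
scores = cv2.matchTemplate(target, template, cv2.TM_CCOEFF_NORMED)
ys, xs = (scores >= 0.8).nonzero()  # positions whose score passes the threshold

# Loop over the matched coordinate values: pack each vehicle's parameters
# into a tuple and draw a rectangular frame marking the matched position.
for x, y in zip(xs, ys):
    box = (int(x), int(y), w, h)
    cv2.rectangle(target, box[:2], (box[0] + box[2], box[1] + box[3]), 255, 2)

cv2.imwrite("matched.png", target)
```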
Specifically, performing object matching according to the aerial view and the object image information to obtain a plurality of second object matching point pairs, including:
respectively carrying out target detection on the target image information and the aerial view according to a target detection algorithm to obtain a target detection result;
when the target detection result is that a target exists, performing target distribution shape matching according to the aerial view and the target image information, and obtaining a plurality of shape matching point pairs according to the target distribution shape matching result;
the plurality of shape matching point pairs are taken as second target matching point pairs.
It should be understood that in the embodiment of the present invention, the target image information and the bird's eye view are likewise detected according to a target detection algorithm, which may again be a YOLO-series algorithm. The target detection results for the target image information and the bird's eye view are shown in fig. 8.
Further specifically, performing image projection calculation according to the second target matching point pair to obtain an image projection matrix, including:
performing projective transformation on the target image information in the second target matching point pair to obtain a perspective transformation matrix;
and clipping the canonical view volume according to the perspective transformation matrix, and performing perspective division on the result obtained after clipping the canonical view volume to obtain an image projection matrix.
Here, the image conversion of the target image information according to the countermeasure generation network to obtain the bird's eye view includes:
training a challenge-generating network;
and inputting the target image information into the countermeasure generation network to perform image conversion, and obtaining a bird's eye view.
Further specifically, training the challenge-generating network comprises:
training a GAN network according to paired target image information and bird's-eye-view datasets, the GAN network comprising at least a generator and a discriminator, the generator and the discriminator being trained adversarially against each other;
and setting a loss function of the GAN network, and performing constraint training on the GAN network according to the loss function to obtain a countermeasure generation network, wherein the loss function is used for constraining the similarity of the generator and the discriminator.
In the embodiment of the present invention, the countermeasure generation network is implemented by taking CycleGAN as an example; it should be understood that any neural network that can realize the conversion between the target image information and the bird's eye view may be applied here.
The main purpose of CycleGAN is to achieve conversion between distributions; here the task is to convert target image information into a bird's eye view. Two datasets are available: a target image dataset and a corresponding bird's-eye-view dataset. Let X denote the distribution of the target image dataset and Y the distribution of the bird's-eye-view dataset; as shown in fig. 7a and fig. 7b, the left side of each figure is a target image and the right side is the corresponding bird's eye view. A generator G is trained so that an input target image x passes through G to yield a bird's eye view y', i.e. G(x) = y', x ∈ X, with y' following the distribution of Y as closely as possible (i.e. resembling a road bird's eye view). At the same time a generator F is trained so that an input bird's eye view y from the distribution Y passes through F to yield a target image x', i.e. F(y) = x', y ∈ Y, with x' following the distribution of X as closely as possible. Two discriminators D_X and D_Y therefore need to be trained: D_X scores x and x', giving higher scores to samples that better fit the X distribution (x' resembling a target image), and D_Y scores y and y', giving higher scores to samples that better fit the Y distribution (y' resembling a bird's eye view). The generators and discriminators are adversarial and are trained against each other: the generators strive to produce images that deceive the discriminators, while the discriminators train continuously so that they are not easily deceived. The conversion is illustrated in fig. 3.
As shown in fig. 4, a cycle-consistency constraint is added to CycleGAN (Cycle-Consistent Generative Adversarial Networks): the reconstruction obtained after x passes through G and then F should be as similar as possible to the original x, i.e. F(G(x)) ≈ x. This preserves the correspondence of the road objects (in the embodiment of the invention, it ensures that the vehicle and road content in the bird's eye view stays as close as possible to the content of the original image), instead of letting the model pick an arbitrary image from Y.
In the embodiment of the present invention, the total loss function of CycleGAN is composed of two parts: the adversarial loss $L_{GAN}$, which ensures that generated pictures conform to the distribution of the corresponding dataset (e.g. the bird's eye view), and the cycle consistency loss $L_{cyc}$, which ensures the degree of information correlation between the original picture and the real picture:

$$\mathrm{Loss} = L_{GAN}(G, D_Y, X, Y) + L_{GAN}(F, D_X, Y, X) + \lambda L_{cyc}(G, F)$$

$$L_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim Y}[\log D_Y(y)] + \mathbb{E}_{x \sim X}[\log(1 - D_Y(G(x)))]$$

$$L_{GAN}(F, D_X, Y, X) = \mathbb{E}_{x \sim X}[\log D_X(x)] + \mathbb{E}_{y \sim Y}[\log(1 - D_X(F(y)))]$$

$$L_{cyc}(G, F) = \mathbb{E}_{x \sim X}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y \sim Y}\big[\lVert G(F(y)) - y \rVert_1\big]$$

Here $L_{GAN}(G, D_Y, X, Y)$ reflects the mapping of x through G toward Y in CycleGAN: the purpose of G is to minimize this adversarial loss, while the purpose of $D_Y$ is to maximize it. $L_{GAN}(F, D_X, Y, X)$ reflects the mapping of y through F toward X; F plays the same role as G, and $D_X$ the same role as $D_Y$.
In an embodiment of the present invention, the loss can be divided into the loss for training the generators and the loss for training the discriminators.

When training the generators, $D_X$ and $D_Y$ are fixed and the parameters of G and F are adjusted; the aim of training is to minimize the loss, i.e.

$$\min_{G, F}\; \mathbb{E}_{x \sim X}[\log(1 - D_Y(G(x)))] + \mathbb{E}_{y \sim Y}[\log(1 - D_X(F(y)))] + \lambda L_{cyc}(G, F)$$

When training the discriminators, the parameters of G and F are fixed. For $D_Y$, as shown below, the value of $D_Y(y)$ on real bird's eye views is maximized while the value of $D_Y(G(x))$ on generated ones is minimized; the same holds for $D_X$:

$$\max_{D_Y}\; \mathbb{E}_{y \sim Y}[\log D_Y(y)] + \mathbb{E}_{x \sim X}[\log(1 - D_Y(G(x)))]$$
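For concreteness, the sketch below computes these generator and discriminator losses in PyTorch, substituting the least-squares adversarial loss commonly used by CycleGAN implementations for the log form above (the roles of the terms are the same); G, F_gen, D_X and D_Y are assumed to be callable network modules, and lam = 10.0 is an assumed weight for the cycle term.

```python
import torch
import torch.nn.functional as nnf

def cyclegan_losses(G, F_gen, D_X, D_Y, x, y, lam=10.0):
    """One training step's losses for the image-to-BEV CycleGAN described
    above. G: target image -> bird's eye view; F_gen: bird's eye view -> image."""
    y_fake, x_fake = G(x), F_gen(y)

    # Generator adversarial terms: try to make D_Y / D_X score fakes as real.
    dy_fake, dx_fake = D_Y(y_fake), D_X(x_fake)
    loss_gan = (nnf.mse_loss(dy_fake, torch.ones_like(dy_fake)) +
                nnf.mse_loss(dx_fake, torch.ones_like(dx_fake)))

    # Cycle consistency: F(G(x)) should reconstruct x, G(F(y)) reconstruct y.
    loss_cyc = nnf.l1_loss(F_gen(y_fake), x) + nnf.l1_loss(G(x_fake), y)
    loss_generators = loss_gan + lam * loss_cyc

    # Discriminator terms: real samples scored 1, generated samples 0;
    # generator outputs are detached so only D_X / D_Y receive gradients.
    dy_real, dx_real = D_Y(y), D_X(x)
    dy_det, dx_det = D_Y(y_fake.detach()), D_X(x_fake.detach())
    loss_discriminators = (
        nnf.mse_loss(dy_real, torch.ones_like(dy_real)) +
        nnf.mse_loss(dy_det, torch.zeros_like(dy_det)) +
        nnf.mse_loss(dx_real, torch.ones_like(dx_real)) +
        nnf.mse_loss(dx_det, torch.zeros_like(dx_det)))
    return loss_generators, loss_discriminators
```

In training, the two returned losses are minimized alternately: the generator loss with D_X and D_Y frozen, and the discriminator loss with G and F frozen, matching the alternation described above.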
In the embodiment of the invention, the calculation of the image projection matrix may take the OpenGL orthogonal projection transformation as an example. Let p be a point of the original image and p' the point after projection; let (x, y, z) be the coordinates of p, n the distance from the near clipping plane to the camera plane, and f the distance from the far clipping plane to the camera plane.

The depth information of z is preserved in the form az + b when constructing the canonical view volume (CVV) in the z direction: az + b takes the value -1 at the near clipping plane and 1 at the far clipping plane, namely

$$an + b = -1, \quad af + b = 1 \;\Rightarrow\; a = \frac{2}{f - n}, \quad b = -\frac{f + n}{f - n}$$

Substituting into p' yields the orthogonal projection matrix (with x and y taken as already normalized to the view volume):

$$p' = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{2}{f - n} & -\frac{f + n}{f - n} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
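A small sketch of this matrix (NumPy assumed), with the z row built from the derived a and b and x, y taken as already normalized:

```python
import numpy as np

def ortho_projection_matrix(n, f):
    """Homogeneous matrix realizing z' = (2z - (f + n)) / (f - n),
    so that z = n maps to -1 and z = f maps to 1."""
    a = 2.0 / (f - n)
    b = -(f + n) / (f - n)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0,   a,   b],
                     [0.0, 0.0, 0.0, 1.0]])

p = np.array([0.5, -0.2, 30.0, 1.0])           # point with z between n and f
p_proj = ortho_projection_matrix(10.0, 50.0) @ p
# p_proj[2] == 0.0: z = 30 lies halfway between n = 10 and f = 50
```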
in the embodiment of the invention, when the target image information and the bird's eye view are subjected to multi-target shape matching, if targets exist in the target image information, the multi-target distribution shape matching can be normally performed, and if targets do not exist in the target image information, the lane lines or the environment can be subjected to semantic segmentation matching according to a semantic segmentation network, so that a matching result is obtained.
Specifically, for the semantic segmentation network training, a target image information dataset and a bird's eye view dataset (generated above) may be prepared, with the lane lines and roadside features (trees, fences, lawns, etc.) annotated, and the semantic segmentation network may then be trained. The semantic segmentation network may be U-Net, SegNet, etc.; the invention is not limited in this respect, and the network may be selected according to needs.
In an embodiment of the present invention, as shown in fig. 5, performing a panning zoom calculation according to the first target matching point pair to obtain a panning zoom matrix, including:
s410, performing image scaling on the first target matching point pair according to a scaling factor to obtain a scaling matrix;
s420, performing image translation on the first target matching point pair according to the offset to obtain a translation matrix;
s430, obtaining a translation scaling matrix according to the scaling matrix and the translation matrix.
In the embodiment of the invention, image scaling is carried out on the first target matching point pair as follows: let the image coordinates be (x, y) and the scaled coordinates be (x', y'), with $s_x$ and $s_y$ the scaling factors; then

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Image translation is carried out on the first target matching point pair as follows: let the image coordinates be (x, y) and the translated coordinates be (x', y'), with $d_x$ and $d_y$ the offsets; then

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & d_x \\ 0 & 1 & d_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

It should be appreciated that in embodiments of the present invention, the translation scaling matrix is specifically obtained by right-multiplying the translation matrix by the scaling matrix.
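As a sketch (NumPy assumed), the composition "first scale, then translate" looks as follows; the factor and offset values in the usage lines are arbitrary examples.

```python
import numpy as np

def translation_scaling_matrix(sx, sy, dx, dy):
    """Right-multiply the translation matrix by the scaling matrix:
    applied to a column vector [x, y, 1]^T, scaling acts first."""
    S = np.array([[sx, 0.0, 0.0],
                  [0.0, sy, 0.0],
                  [0.0, 0.0, 1.0]])
    T = np.array([[1.0, 0.0, dx],
                  [0.0, 1.0, dy],
                  [0.0, 0.0, 1.0]])
    return T @ S

M = translation_scaling_matrix(2.0, 2.0, 5.0, -3.0)
# (1, 1) -> (2*1 + 5, 2*1 - 3) = (7, -1)
assert np.allclose(M @ [1.0, 1.0, 1.0], [7.0, -1.0, 1.0])
```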
In an embodiment of the present invention, as shown in fig. 6, calibration is performed according to the translation scaling matrix and the image projection matrix, to obtain a calibration result, including:
s510, carrying out coordinate system conversion according to the translation scaling matrix and the target coordinate information to obtain a camera coordinate system conversion result;
in the embodiment of the invention, specifically, a millimeter wave radar coordinate system is established,/>,/>(X, Y, Z), three-dimensional world coordinate system (X, Y), image coordinate system (X, Y), camera coordinate system ()>,/>,/>) Pixel coordinate system (u, v). Converting the millimeter wave radar coordinate system and the camera coordinate system into a three-dimensional world coordinate system, and further converting into an image coordinate system and pixelsAnd the coordinate system is used for realizing the joint calibration of the millimeter wave radar image and the camera image.
The world coordinate system is converted into the camera coordinate system:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T$$

where R represents the 3×3 rotation matrix and T represents the 3×1 translation vector.
S520, performing image coordinate system conversion according to the camera coordinate system conversion result to obtain an image coordinate system conversion result;
specifically, the camera coordinate system is converted into an image coordinate system:
where f represents the focal length of the camera, i.e. the distance from the far clipping plane to the camera plane as described above.
S530, performing pixel coordinate system conversion according to the image coordinate system conversion result and the image projection matrix to obtain a pixel coordinate system conversion result.
In an embodiment of the invention, the image coordinate system is converted into the pixel coordinate system:

$$u = \frac{x}{d_x} + u_0, \quad v = \frac{y}{d_y} + v_0$$

where $(u_0, v_0)$ represents the coordinates of the origin of the image coordinate system in the pixel coordinate system, and $d_x$ and $d_y$ are the width and height of a single pixel in the image plane. Chaining these conversions gives the conversion relation between the millimeter wave radar coordinate system and the camera (pixel) coordinate system:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix}$$

where M is the combined conversion matrix and $Z_c$ represents the normalization parameter.
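Putting the three conversions together, a minimal sketch of the radar-to-pixel chain follows (NumPy assumed); every numeric value is an illustrative placeholder, not a calibration result from the patent.

```python
import numpy as np

def radar_to_pixel(p_radar, R, T, f, dx, dy, u0, v0):
    """Chain the conversions above: radar/world -> camera -> image -> pixel.
    Parameter names follow the text; values must come from calibration."""
    Xc, Yc, Zc = R @ np.asarray(p_radar, dtype=float) + T  # world -> camera
    x, y = f * Xc / Zc, f * Yc / Zc                        # camera -> image plane
    u, v = x / dx + u0, y / dy + v0                        # image -> pixel
    return u, v  # Zc plays the role of the normalization parameter

# Illustrative values: identity rotation, camera 1 m above the radar origin.
R = np.eye(3)
T = np.array([0.0, -1.0, 0.0])
print(radar_to_pixel([2.0, 1.0, 20.0], R, T,
                     f=0.008, dx=4e-6, dy=4e-6, u0=960, v0=540))
# -> (1160.0, 540.0)
```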
Referring to fig. 9, according to the automatic calibration method of the radar integrated machine based on the countermeasure generation network, target image information is converted into a bird's eye view through the countermeasure generation network; target matching is performed using the bird's eye view in combination with the target coordinate information, and translation scaling processing is performed on the matched result; target matching is performed using the bird's eye view in combination with the target image information, and image projection processing is performed on the matched result; finally, calibration is performed according to the result of the translation scaling processing and the result of the image projection processing to obtain a calibration result. In the embodiment of the invention, when the radar has targets, the synchronized GAN-generated bird's eye view and the radar targets are collected; both are in the bird's-eye-view coordinate system and differ only by a translation and a scaling. The target shapes are matched, frames with high matching degree are retained, and after enough point pairs are collected the translation scaling matrix is calculated, averaging to reduce errors. Finally, the two matrices are multiplied to obtain a 3×3 calibration matrix, i.e. the final calibration result expressed as the conversion relation between the millimeter wave radar coordinate system and the camera coordinate system.
Therefore, in the automatic calibration method of the radar integrated machine based on the countermeasure generation network, converting the image information into the bird's eye view through the countermeasure generation network effectively improves the matching efficiency of reliable point pairs, and because the converted bird's eye view separates the vehicles, the matching accuracy between the image and the radar coordinates is improved to a certain extent. In addition, matching the bird's eye view against the target image information and against the target coordinate information effectively improves the accuracy of target shape matching, thereby improving the final calibration result.
It is to be understood that the above embodiments are merely illustrative of the application of the principles of the present invention, but not in limitation thereof. Various modifications and improvements may be made by those skilled in the art without departing from the spirit and substance of the invention, and are also considered to be within the scope of the invention.

Claims (10)

1. An automatic calibration method of a radar integrated machine based on a countermeasure generation network, characterized by comprising the following steps:
acquiring target image information acquired by a camera device and target coordinate information acquired by a radar device;
performing image conversion on the target image information according to the countermeasure generation network to obtain a bird's eye view;
performing target matching according to the aerial view and the target coordinate information to obtain a plurality of first target matching point pairs, and performing target matching according to the aerial view and the target image information to obtain a plurality of second target matching point pairs;
performing translation scaling calculation according to the first target matching point pair to obtain a translation scaling matrix, and performing image projection calculation according to the second target matching point pair to obtain an image projection matrix;
calibrating according to the translation scaling matrix and the image projection matrix to obtain a calibration result;
performing translation scaling calculation according to the first target matching point pair to obtain a translation scaling matrix, wherein the translation scaling matrix is obtained after performing image scaling and image translation on the first target matching point pair;
performing image projection calculation according to the second target matching point pair to obtain an image projection matrix, wherein the image projection matrix is obtained by combining perspective division after performing projection transformation on target image information in the second target matching point pair;
and calibrating according to the translation scaling matrix and the image projection matrix to obtain the calibration result comprises performing coordinate system conversion according to the translation scaling matrix, and then performing a further coordinate system conversion on the obtained result according to the image projection matrix, so as to obtain the calibration result.
2. The automatic calibration method of a radar integrated machine based on a countermeasure generation network according to claim 1, wherein performing target matching according to the bird's eye view and the target coordinate information to obtain a plurality of first target matching point pairs includes:
performing target detection on the aerial view according to a target detection algorithm to obtain a target detection result;
performing target shape matching according to the target detection result and the target coordinate information;
and obtaining a plurality of first target matching point pairs according to the target shape matching result.
3. The automatic calibration method of the radar integrated machine based on the countermeasure generation network according to claim 2, wherein the target shape matching is performed according to the target detection result and the target coordinate information, and the method comprises the following steps:
determining the target to be matched according to the target coordinate information;
and carrying out target shape random matching in the target detection result according to the target to be matched to obtain a target shape matching result.
4. The automatic calibration method of a radar integrated machine based on a countermeasure generation network according to claim 3, wherein performing target shape random matching in the target detection result according to the target to be matched to obtain a target shape matching result comprises:
acquiring a position set to be matched in the target detection result according to the target to be matched;
looping over the coordinate values of the positions to be matched, and adjusting the coordinates of each position to be matched according to its parameter information within the loop;
and calibrating the coordinates of the adjusted position to be matched to obtain a target shape matching result.
5. The automatic calibration method of a radar integrated machine based on a countermeasure generation network according to any one of claims 1 to 4, wherein performing target matching according to the bird's eye view and the target image information to obtain a plurality of second target matching point pairs includes:
respectively carrying out target detection on the target image information and the aerial view according to a target detection algorithm to obtain a target detection result;
when the target detection result is that a target exists, performing target distribution shape matching according to the aerial view and the target image information, and obtaining a plurality of shape matching point pairs according to the target distribution shape matching result;
the plurality of shape matching point pairs are taken as second target matching point pairs.
6. The automatic calibration method of a radar integrated machine based on the countermeasure generation network according to claim 5, wherein performing image projection calculation according to the second target matching point pair to obtain an image projection matrix comprises:
performing projective transformation on the target image information in the second target matching point pair to obtain a perspective transformation matrix;
and clipping the canonical view volume according to the perspective transformation matrix, and performing perspective division on the result obtained after clipping the canonical view volume to obtain an image projection matrix.
7. The automatic calibration method of a radar integrated machine based on a countermeasure generation network according to any one of claims 1 to 4, wherein performing a panning scaling calculation according to the first target matching point pair to obtain a panning scaling matrix includes:
image scaling is carried out on the first target matching point pair according to a scaling factor, and a scaling matrix is obtained;
performing image translation on the first target matching point pair according to the offset to obtain a translation matrix;
and obtaining a translation scaling matrix according to the scaling matrix and the translation matrix.
8. The automatic calibration method of a radar integrated machine based on a countermeasure generation network according to any one of claims 1 to 4, wherein the calibrating according to the translation scaling matrix and the image projection matrix to obtain a calibration result includes:
performing coordinate system conversion according to the translation scaling matrix and the target coordinate information to obtain a camera coordinate system conversion result;
performing image coordinate system conversion according to the camera coordinate system conversion result to obtain an image coordinate system conversion result;
and performing pixel coordinate system conversion according to the image coordinate system conversion result and the image projection matrix to obtain a pixel coordinate system conversion result.
9. The automatic calibration method of a radar integrated machine based on a countermeasure generation network according to any one of claims 1 to 4, wherein performing image conversion on the target image information according to the countermeasure generation network to obtain a bird's eye view comprises:
training a challenge-generating network;
and inputting the target image information into the countermeasure generation network to perform image conversion, and obtaining a bird's eye view.
10. The automatic calibration method of a radar integrated machine based on a countermeasure generation network according to claim 9, wherein training the countermeasure generation network comprises:
training a GAN network according to paired target image information and bird's-eye-view datasets, the GAN network comprising at least a generator and a discriminator, the generator and the discriminator being trained adversarially against each other;
and setting a loss function of the GAN network, and performing constraint training on the GAN network according to the loss function to obtain a countermeasure generation network, wherein the loss function is used for constraining the similarity of the generator and the discriminator.
CN202311775975.7A 2023-12-22 2023-12-22 Automatic calibration method of radar integrated machine based on countermeasure generation network Active CN117456013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311775975.7A CN117456013B (en) 2023-12-22 2023-12-22 Automatic calibration method of radar integrated machine based on countermeasure generation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311775975.7A CN117456013B (en) 2023-12-22 2023-12-22 Automatic calibration method of radar integrated machine based on countermeasure generation network

Publications (2)

Publication Number Publication Date
CN117456013A true CN117456013A (en) 2024-01-26
CN117456013B CN117456013B (en) 2024-03-05

Family

ID=89584078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311775975.7A Active CN117456013B (en) 2023-12-22 2023-12-22 Automatic calibration method of radar integrated machine based on countermeasure generation network

Country Status (1)

Country Link
CN (1) CN117456013B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115015909A (en) * 2022-05-11 2022-09-06 超级视线科技有限公司 Radar data and video data fusion method and system based on perspective transformation
KR20230003803A (en) * 2021-06-30 2023-01-06 주식회사 모빌테크 Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230003803A (en) * 2021-06-30 2023-01-06 주식회사 모빌테크 Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
CN115015909A (en) * 2022-05-11 2022-09-06 超级视线科技有限公司 Radar data and video data fusion method and system based on perspective transformation

Also Published As

Publication number Publication date
CN117456013B (en) 2024-03-05

Similar Documents

Publication Publication Date Title
CN107977997B (en) Camera self-calibration method combined with laser radar three-dimensional point cloud data
CN106878687A (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN113985445A (en) 3D target detection algorithm based on data fusion of camera and laser radar
CN110288659B (en) Depth imaging and information acquisition method based on binocular vision
CN113408584B (en) RGB-D multi-modal feature fusion 3D target detection method
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN107560592A (en) A kind of precision ranging method for optronic tracker linkage target
CN104021538A (en) Object positioning method and device
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN111998862B (en) BNN-based dense binocular SLAM method
KR20160109761A (en) Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN110717445A (en) Front vehicle distance tracking system and method for automatic driving
CN103593641A (en) Object detecting method and device based on stereoscopic camera
KR20170143439A (en) Big data system connecting apparatus and method for constructing 3d spatial information
CN110120013A (en) A kind of cloud method and device
CN112132900A (en) Visual repositioning method and system
CN114511611A (en) Image recognition-based goods heap statistical method and device
Marí et al. To bundle adjust or not: A comparison of relative geolocation correction strategies for satellite multi-view stereo
CN111950524B (en) Orchard local sparse mapping method and system based on binocular vision and RTK
CN108399630B (en) Method for quickly measuring distance of target in region of interest in complex scene
CN117456013B (en) Automatic calibration method of radar integrated machine based on countermeasure generation network
CN113190564A (en) Map updating system, method and device
CN112488022A (en) Panoramic monitoring method, device and system
Chenchen et al. A camera calibration method for obstacle distance measurement based on monocular vision

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant