CN115619873A - Track tracing-based radar vision automatic calibration method - Google Patents
Info
- Publication number
- CN115619873A (application number CN202211154923.3A)
- Authority
- CN
- China
- Prior art keywords
- radar
- target
- lane
- detection
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The invention discloses a track tracing-based radar vision automatic calibration method. Images are obtained by video monitoring equipment and input into a convolution model for target recognition, giving the state information of each detected target; the target detection area is image-corrected to obtain an initial transformation matrix; multi-frame, multi-target radar state information is collected and trajectory statistics are accumulated; the trajectory information of the video targets and radar targets is analysed to infer the lane information in reverse; the two coordinate systems are then associated according to the lane each target occupies and its longitudinal relative position, the lanes are divided into module areas, and the radar perspective transformation matrix is calculated from the association information and the module areas. By detecting each target's lane from its traced track and accurately associating targets of the two coordinate systems by their front-to-back relative positions, the method overcomes the low efficiency, narrow coverage and point drift of manual calibration point selection and achieves fast, efficient automatic calibration.
Description
Technical Field
The invention belongs to the field of intelligent traffic detection, in particular to multi-view detection area correction, and specifically relates to a track tracing-based radar vision automatic calibration method.
Background
In recent years, the variety and number of road vehicles has grown daily and road usage is gradually reaching saturation, while smart city construction also advances day by day, making real-time monitoring and control of urban road traffic especially critical. Road detectors combining radar and video have therefore been developed, overcoming the limitation that a single detector is constrained by its environment. The key to such a radar-video detector, however, is the fusion of the two information sources, and accurate association of the two coordinate systems is a prerequisite for accurate fusion.
The existing association method is basically as follows: radar data and video are acquired over a period of time, target point pairs are selected by human observation and manual clicking, and the perspective transformation matrix is then calculated. This method has the following defects. When multiple targets are involved, the differing detection viewing angles and the radar's inaccurate detection of stationary targets mean that the numbers of target points do not match and point-to-point correspondence cannot be established accurately. Manual clicking easily introduces point-pair deviations, so the solved coordinate mapping contains errors. The perspective transformation matrix is obtained by fitting multiple groups of non-collinear point pairs, and manual selection cannot cover the whole area, so the calibration effect is not ideal, calibration must be repeated, time is wasted and the result is inaccurate.
Disclosure of Invention
The invention aims to provide a track tracing-based radar vision automatic calibration method that addresses the shortcomings of manual calibration. A deep-learning-based detection method detects the video targets; image area correction prepares the image for trajectory tracing; the target trajectory information of radar and video is segmented to infer each target's lane in reverse; radar and video targets at the same relative position in the same lane are associated; lane sub-module area thresholds are set and the corresponding perspective transformation matrix is calculated. The invention guarantees full coverage of the association points, keeps the selected point positions free of drift, and ensures correct correspondence of the point pairs, achieving automatic, efficient and highly accurate radar-vision calibration.
The technical solution for realizing the purpose of the invention is as follows: a radar vision automatic calibration method based on track tracing comprises the following steps:
step 1, collecting state information of radar target detection points, including position, speed and state;
step 2, collecting video images at the same time as the radar, loading a target detection model, inputting the video images into the model for target detection, and acquiring the state information of each detected target, including position, category and size;
step 3, selecting a video detection area, correcting the detection area by utilizing perspective transformation to obtain a first transformation matrix, and transforming a video detection target to a first correction coordinate system by utilizing the perspective transformation matrix;
step 4, accumulating the trajectories of the video and radar targets within a threshold time t, analysing the trajectory maps of the radar and video targets respectively to infer the lanes in reverse, calculating the lane reference line and detection area boundary of the radar coordinate system, and dividing the radar and video lanes into equal module areas;
and step 5, selecting radar-vision coordinate system point pairs according to lane number and longitudinal queuing position, ending point-pair selection once the lane module areas are covered, and then calculating the final transformation matrix from the group of point pairs.
Further, in step 1, the radar target detection point state information is structured data obtained after point cloud clustering analysis.
Further, the detection framework in step 2 is Darknet; the target detection model is a YOLOv3-tiny model trained with a training set that is drawn from different traffic data scenes and annotated according to the types of detected targets; and the TensorRT accelerated inference framework is used to accelerate inference during the inference process, i.e. model verification.
Further, step 3 is specifically: selecting an irregular detection area with a lane line as the front reference, calculating the first transformation matrix from the corner points of the area, and using this matrix to perform the mapping transformation of the detection area;
wherein the first transformation matrix is solved from the perspective-transformation relation (x_img1, y_img1, 1)ᵀ ∝ M1·(x_img0, y_img0, 1)ᵀ, i.e., writing M1 = [m_kl] (k, l = 1, 2, 3):
x_img1 = (m11·x_img0 + m12·y_img0 + m13) / (m31·x_img0 + m32·y_img0 + m33)
y_img1 = (m21·x_img0 + m22·y_img0 + m23) / (m31·x_img0 + m32·y_img0 + m33)
in the formula, x_img1 and y_img1 are the corrected coordinates, into which the four vertex coordinates of the image frame are substituted; x_img0 and y_img0 are the coordinates of the selected detection area, into which its four vertex coordinates are substituted; and M1 is the first transformation matrix.
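By way of illustration only, the following sketch shows how the first transformation matrix M1 can be solved from the four detection-area vertices and the four image-frame vertices, assuming OpenCV is available; the corner coordinates and image size are placeholders, not values from the patent.

```python
import cv2
import numpy as np

# Four vertices of the selected (irregular) detection area in the original image,
# and the four vertices of the image frame; the numbers are illustrative only.
detection_area = np.float32([[420, 310], [860, 315], [1240, 700], [40, 705]])
frame_w, frame_h = 1280, 720
image_frame = np.float32([[0, 0], [frame_w, 0], [frame_w, frame_h], [0, frame_h]])

# M1 maps (x_img0, y_img0) in the detection area to (x_img1, y_img1) in the
# first corrected coordinate system, as in the formula above.
M1 = cv2.getPerspectiveTransform(detection_area, image_frame)
print(M1)
```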
Further, analysing the trajectory maps to infer the lanes in reverse in step 4 specifically comprises:
step 5-1, binarizing the radar and video target trajectory maps;
step 5-2, applying a further morphological closing operation to the binary images obtained in step 5-1;
and step 5-3, using the Canny operator and particle area filtering to divide the lane areas, numbering the lanes from left to right, and dividing each lane into n area modules.
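A minimal sketch of steps 5-1 to 5-3 is given below, assuming the trajectory map is an 8-bit grayscale image and that particle area filtering is implemented as connected-component filtering by area; the threshold, kernel size and minimum area are assumptions. Division of each lane into n area modules then follows as in the grid-lane sketch given later in the detailed description.

```python
import cv2

def extract_lane_regions(traj_map, min_area=2000):
    """traj_map: 8-bit grayscale trajectory accumulation image."""
    # Step 5-1: binarize the trajectory map
    _, binary = cv2.threshold(traj_map, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 5-2: morphological closing to seal small cracks in the tracks
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Step 5-3: Canny edges delineate lane boundaries; connected regions are
    # filtered by area ("particle area filtering") and numbered left to right
    edges = cv2.Canny(closed, 50, 150)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    lanes = [c for c in contours if cv2.contourArea(c) >= min_area]
    lanes.sort(key=lambda c: cv2.boundingRect(c)[0])
    return [(i + 1, cv2.boundingRect(c)) for i, c in enumerate(lanes)], edges
```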
Further, the lane reference line and the detection area boundary of the radar coordinate system are calculated in step 4 as follows: longitudinal trajectory accumulation is performed on the radar target trajectories to obtain an accumulation curve, the curve is smoothed with a cubic B-spline curve equation, and extreme points are taken from the two ends of the superimposed data toward the centre, thereby determining the left, right and lower boundaries of the radar detection area;
wherein the longitudinal accumulation formula is:
y_i = Σ_{j=1}^{m} p(j, i), x_i = i
where the image has size m×n and becomes one-dimensional data (one accumulated value per column) after longitudinal superposition; (x_i, y_i) is the accumulated data of the i-th column, and p(j, i) is the pixel value in row j and column i of the original image;
the left, right and lower boundaries are determined as follows: the x values of the left and right extreme points are used as the x values of the left and right boundaries and of the two ends of the lower boundary, and the smaller of the two extreme points' y values is used as the y value of the lower boundary and of the left and right boundaries.
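As a rough sketch only, the boundary computation might be implemented as follows, assuming the radar trajectory map is a 2-D array and using SciPy's smoothing spline in place of an explicit cubic B-spline fit; the smoothing factor is an assumption.

```python
import numpy as np
from scipy.interpolate import splrep, splev
from scipy.signal import argrelextrema

def detection_boundaries(traj_map):
    """traj_map: 2-D array (m x n) of the radar target trajectory image."""
    # Longitudinal superposition: y_i is the pixel sum of column i
    y = traj_map.sum(axis=0).astype(float)
    x = np.arange(len(y))
    # Cubic (k=3) smoothing spline over the accumulated curve
    tck = splrep(x, y, k=3, s=0.01 * len(y) * y.var())
    y_s = splev(x, tck)
    # Take extreme (minimum) points from both ends toward the centre
    minima = argrelextrema(y_s, np.less)[0]
    left, right = minima[0], minima[-1]
    # The smaller smoothed value of the two extreme points fixes the lower boundary
    lower = min(y_s[left], y_s[right])
    return left, right, lower
```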
Further, in step 5, selecting radar-vision coordinate system point pairs according to lane number and longitudinal queuing position, and ending the selection, is specifically: the targets collected in each frame are classified by lane; the detected targets in each video lane and each radar lane are sorted longitudinally; the detection area modules are matched to the areas occupied by the detected targets; when the lane number and module number of a detected target correspond in the two coordinate systems, the point pair is selected and inserted into the target-pair queue; when every area module has a corresponding target point pair, point-pair selection is finished and the final perspective transformation matrix is calculated.
Compared with the prior art, the invention has the following remarkable advantages:
(1) The trajectory back-inference selection method selects point pairs with the relative position of each target determined, overcoming the defect that, when multiple targets are selected manually, the radar point corresponding to a given video target point cannot be obtained accurately.
(2) The invention takes the centre of the lower edge of the video detection frame as the detection target point, fully accounting for the target drift and apparent lane change that occur when, because of target size, the selected point of a video detection target does not lie on the road plane; compared with manually chosen video target points, this improves the accuracy of the perspective transformation.
(3) When the lanes are inferred in reverse in the radar coordinate system, the stop line is taken as the front edge of each lane, fully accounting for the inconsistency between video and radar caused by their different viewing angles and ensuring one-to-one correspondence of targets.
(4) The category of the video-detected target is taken into account: when one radar module area corresponds to several detected targets while the corresponding video module area contains only one, the redundant points can be handled according to the target category.
Compared with the prior art, the method performs accurate point-pair selection according to the relative positions of targets, and is highly targeted and robust.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is a schematic flow chart of an automatic radar vision calibration method based on trajectory tracing in an embodiment.
Fig. 2 is a schematic diagram of a radar target track of the automatic radar vision calibration method based on track tracing in one embodiment.
Fig. 3 is a schematic diagram of a longitudinal accumulated smooth curve of a radar target track in the automatic radar vision calibration method based on track tracing in one embodiment.
Fig. 4 is a schematic diagram of a radar lane backward-pushing result of the automatic radar vision calibration method based on the track tracing in one embodiment.
Fig. 5 is a schematic diagram illustrating a video detection area selection result of the automatic radar vision calibration method based on track tracing in one embodiment.
Fig. 6 is a schematic diagram illustrating a detection area correction result of the automatic radar vision calibration method based on trajectory tracing in one embodiment.
Fig. 7 is a schematic diagram of a video detection target recognition result of the trajectory-tracing-based radar vision automatic calibration method in an embodiment.
Fig. 8 is a schematic diagram of the radar-vision calibration result of the automatic radar vision calibration method based on the trajectory tracing in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that if the description of "first", "second", etc. is provided in the embodiment of the present invention, the description of "first", "second", etc. is only for descriptive purposes and is not to be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one of the feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
In an embodiment, with reference to fig. 1, there is provided a method for automatic radar vision calibration based on trajectory tracing, including the following steps:
step 1, collecting state information of radar target detection points, including position, speed and state;
step 2, collecting video images at the same time as the radar, loading a target detection model, inputting the video images into the model for target detection, and acquiring the state information of each detected target, including position, category and size;
step 3, selecting a video detection area, correcting the detection area by utilizing perspective transformation to obtain a first transformation matrix, and transforming a video detection target to a first correction coordinate system by utilizing the perspective transformation matrix;
step 4, accumulating the trajectories of the video and radar targets within a threshold time t, analysing the trajectory maps of the radar and video targets respectively to infer the lanes in reverse, calculating the lane reference line and detection area boundary of the radar coordinate system, and dividing the radar and video lanes into equal module areas;
and step 5, selecting radar-vision coordinate system point pairs according to lane number and longitudinal queuing position, ending point-pair selection once the lane module areas are covered, and then calculating the final transformation matrix from the group of point pairs.
In the embodiment of the invention, the radar target information is obtained from the structured data and comprises speed, position and state. The convolution model used for target detection and recognition on the video frames is Darknet combined with the YOLOv3-tiny algorithm, and the recognized targets include car, bus, bike and truck; a TensorRT acceleration model is used to speed up inference during target detection. The video detection area is selected manually once, starting from the stop line, and the image is corrected. Targets are transformed into the first correction coordinate system for trajectory statistics, avoiding the errors that arise when target lanes are counted and segmented in the original image. During radar lane back-inference, the trajectory accumulation information is analysed and the position of the stop line is calculated, so that the video and radar lane reference lines are consistent. The radar and video lanes are each divided equally into n module areas, and the module area, the relative position of the target and the target category are used as the conditions for selecting point-pair targets, so that targets are located accurately.
To meet the requirements of a comprehensive traffic management system for all-round, real-time, accurate and efficient intelligent transportation, to raise traffic safety levels and to relieve congestion, the invention provides a track tracing-based radar vision automatic calibration method. First, radar target signals are collected; the image-level target detection result is obtained from the video information; the image is corrected according to the detection area to obtain the first correction coordinate system, and the video detection targets are transformed into it; the lanes are inferred in reverse from the target trajectories in the radar coordinate system and in the first correction coordinate system; the lane module areas are divided, and point pairs are selected using the lane numbers and the relative positions of the module areas and the targets; when every module area contains selected points, the perspective transformation matrix is calculated, the video and radar coordinate systems are associated, and the video-radar calibration task is completed.
Radar lane back-inference
The radar lane back-inference is as follows: the radar acquires structured target data including position, speed and state, and stationary targets are filtered out according to the state information; a radar target trajectory map over a time period t is drawn and longitudinal pixel accumulation is applied to it, as shown in FIG. 2, using the accumulation formula
y_i = Σ_{j=1}^{m} p(j, i), x_i = i.
Cubic spline interpolation smoothing is then applied to the accumulated data, each spline segment having the form
y = a0 + a1·x + a2·x² + a3·x³
As shown in fig. 3, a threshold is set on the smoothing result and the first minimum point on each side is searched from the two ends of the curve toward the interior, determining the front edge line of each lane and the left and right boundary lines of the radar detection area. The trajectories within the detection area are then processed: the trajectory image is binarized, a morphological closing operation removes small cracks, Canny edge extraction is applied, and particle area filtering of the closed regions yields the boundary range of each lane; the lanes obtained are displayed in the original image as shown in fig. 4. The lanes are numbered from left to right, and each lane is divided into area modules, giving the radar grid-lane information.
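A minimal sketch of the grid-lane construction follows, assuming each back-inferred lane is represented by its left/right extent between the stop line (front edge) and the lower boundary and is split into n equal longitudinal modules; the data structure is an assumption rather than the patent's own representation.

```python
from dataclasses import dataclass

@dataclass
class LaneModule:
    lane_no: int      # lane number, counted from left to right
    module_no: int    # module number, counted from the stop line backwards
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def build_grid(lane_bounds, stop_line_y, lower_boundary_y, n_modules):
    """lane_bounds: list of (x_left, x_right) per lane, ordered left to right."""
    modules = []
    length = (lower_boundary_y - stop_line_y) / n_modules
    for lane_no, (x_left, x_right) in enumerate(lane_bounds, start=1):
        for m in range(n_modules):
            modules.append(LaneModule(lane_no, m + 1, x_left, x_right,
                                      stop_line_y + m * length,
                                      stop_line_y + (m + 1) * length))
    return modules
```

The same division is applied to the video lanes in the first correction coordinate system, so that every (lane number, module number) cell exists in both coordinate systems.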
Video lane back-inference
The detection area is marked on the acquired image, as shown in fig. 5. Because of imaging, viewing angle and other factors the lanes converge toward a point; to change this situation the coordinate information of the detection area is used for image correction, the correction formula being the perspective-transformation relation given above, (x_img1, y_img1, 1)ᵀ ∝ M1·(x_img0, y_img0, 1)ᵀ.
The 4 corner points of the original detection area and the four vertex coordinates of the image are substituted into the formula to calculate the first perspective transformation matrix; all pixels of the detection area are then transformed into the first correction coordinate system by this matrix, completing the initial transformation, with the result shown in fig. 6.
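Purely as an illustration, the initial correction of a video frame might be applied as follows with OpenCV, reusing the placeholder corner coordinates from the earlier sketch; the file names are assumptions.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                      # original video frame (assumed path)
h, w = frame.shape[:2]
detection_area = np.float32([[420, 310], [860, 315], [1240, 700], [40, 705]])
image_frame = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M1 = cv2.getPerspectiveTransform(detection_area, image_frame)
# Map every pixel of the detection area into the first correction coordinate system
corrected = cv2.warpPerspective(frame, M1, (w, h))
cv2.imwrite("corrected.jpg", corrected)
```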
Target detection is performed on the original image. A convolution detection model is first obtained: the Darknet model is selected and trained with the YOLOv3-tiny algorithm, using the COCO dataset to train the convolution model weight parameters. In the modified parameter configuration file the batch size is set to 64 and the learning rate to 0.001; additional training samples are generated by setting a rotation angle and adjusting saturation, exposure and hue; the learning-rate adjustment strategy selects the 'steps' policy, decaying the learning rate after a given number of training iterations. The trained convolution model is the combination of the network model and the weight parameters; the model file is passed to the TensorRT deep-learning framework for parsing, and the generated general ONNX model is converted into a TRT model for accelerated inference and deployment. Fig. 7 shows the target recognition result, giving the position, size, category and probability information of each target.
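A minimal sketch of the ONNX-to-TensorRT conversion step is shown below, assuming TensorRT 8.x Python bindings and an already exported yolov3-tiny.onnx file; the file names and the FP16 flag are assumptions, not values taken from the patent.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov3-tiny.onnx", "rb") as f:       # ONNX model exported from Darknet
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)           # optional half-precision acceleration
engine = builder.build_serialized_network(network, config)

with open("yolov3-tiny.trt", "wb") as f:        # serialized TRT engine for deployment
    f.write(engine)
```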
The detected targets are screened by category, keeping car, bus, bike and truck. The screened detections are transformed into the first correction coordinate system using the first perspective transformation matrix, the coordinate of each target point being taken at the centre of the lower edge of its detection frame, i.e. x_p = x + w/2 and y_p = y + h for a detection frame with top-left corner (x, y), width w and height h.
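As a sketch under the assumption that detection boxes are given as (x, y, w, h) with (x, y) the top-left corner, the target-point selection and its mapping into the first correction coordinate system could look like this:

```python
import cv2
import numpy as np

def project_target(box, M1):
    """box = (x, y, w, h): detection box in the original image, (x, y) = top-left."""
    x, y, w, h = box
    # Centre of the lower edge of the detection frame: x_p = x + w/2, y_p = y + h
    pt = np.float32([[[x + w / 2.0, y + h]]])
    # Perspective mapping into the first correction coordinate system
    return cv2.perspectiveTransform(pt, M1)[0, 0]
```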
counting target tracks in a period of time, and processing images according to the target tracks, specifically: image binarization, close operation, canny edge detection, image segmentation, particle area filtering and finally lane information extraction. And sequencing the extracted lanes from left to right, and further dividing module areas to obtain video grid lane information.
Radar-vision target point pair selection
At each time instant the lane number and module area number of every radar target and video target are determined, and the lanes and module areas containing targets are traversed starting from the radar targets. When the numbers of targets in the same module area are equal, points are paired according to their longitudinal relative positions. When the numbers do not match and the number of video points in the area is smaller than the number of radar points, the video detection category is considered: if a bus or truck is present and there is only one such target, the large target is taken to produce several radar points; the video module area is sorted longitudinally, the small targets are paired with the radar targets in the same relative order, the remaining radar points are sorted from small to large in the longitudinal direction, and the smallest is selected to correspond to the large video target point. Module areas falling under any other case are not associated. When all module areas are covered by targets, point-pair selection stops, the selected point-pair queue is input into the perspective transformation matrix calculation formula, and the final perspective transformation matrix is solved. Fig. 8 shows the target association effect of this method.
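A simplified sketch of this matching and of the final matrix solution is given below, assuming each target has already been assigned a lane, a module and a longitudinal coordinate; only the equal-count case is shown, and cv2.findHomography in least-squares mode stands in for the perspective-transformation calculation over all collected point pairs.

```python
import cv2
import numpy as np
from collections import defaultdict

def match_frame(radar_targets, video_targets, pair_queue, covered):
    """Each target: dict with keys 'lane', 'module', 'y' (longitudinal pos) and 'pt'."""
    cells = defaultdict(lambda: ([], []))
    for t in radar_targets:
        cells[(t['lane'], t['module'])][0].append(t)
    for t in video_targets:
        cells[(t['lane'], t['module'])][1].append(t)
    for cell, (rad, vid) in cells.items():
        # Simple case only: equal target counts in the same lane module area
        if rad and len(rad) == len(vid):
            rad.sort(key=lambda t: t['y'])        # longitudinal (queuing) order
            vid.sort(key=lambda t: t['y'])
            for r, v in zip(rad, vid):
                pair_queue.append((r['pt'], v['pt']))
            covered.add(cell)

def solve_final_matrix(pair_queue):
    radar_pts = np.float32([p[0] for p in pair_queue])
    video_pts = np.float32([p[1] for p in pair_queue])
    # Least-squares perspective matrix over all collected point pairs
    M, _ = cv2.findHomography(radar_pts, video_pts, 0)
    return M
```

Matching is repeated frame by frame until every (lane, module) cell appears in the covered set, after which solve_final_matrix is called once on the accumulated queue.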
In one embodiment, there is provided a trajectory tracing-based radar vision automatic calibration system, the system comprising, executed in sequence:
the system comprises a first module, a second module and a third module, wherein the first module is used for collecting state information of radar target detection points, including positions, speeds and states;
the second module is used for collecting video images at the same time as the radar, loading a target detection model, inputting the video images into the model for target detection, and acquiring the state information of each detected target, including position, category and size;
the third module is used for selecting a video detection area and correcting the detection area by utilizing perspective transformation to obtain a first transformation matrix;
the fourth module is used for accumulating the trajectories of the video and radar targets within a threshold time t, analysing the trajectory maps of the radar and video targets respectively to infer the lanes in reverse, dividing the radar and video lanes into equal module areas, and calculating the lane reference line and detection area boundary of the radar coordinate system;
and the fifth module is used for performing radar-vision coordinate system point-pair selection according to the lane number and the longitudinal queuing position, ending point-pair selection once the lane module areas are covered, and then calculating the final transformation matrix from the group of point pairs.
For specific limitations of the trajectory-tracing-based radar vision automatic calibration system, reference may be made to the above limitations of the trajectory-tracing-based radar vision automatic calibration method, which are not described herein again. All modules in the above-mentioned track tracing-based radar vision automatic calibration system can be realized by software, hardware and their combination in whole or in part. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are given by way of illustration of the principles of the present invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications are within the scope of the invention as claimed.
Claims (10)
1. A radar vision automatic calibration method based on track tracing is characterized by comprising the following steps:
step 1, collecting state information of radar target detection points, including position, speed and state;
step 2, collecting video images at the same time as the radar, loading a target detection model, inputting the video images into the model for target detection, and acquiring the state information of each detected target, including position, category and size;
step 3, selecting a video detection area, correcting the detection area by utilizing perspective transformation to obtain a first transformation matrix, and transforming a video detection target to a first correction coordinate system by utilizing the perspective transformation matrix;
step 4, accumulating the trajectories of the video and radar targets within a threshold time t, analysing the trajectory maps of the radar and video targets respectively to infer the lanes in reverse, calculating the lane reference line and detection area boundary of the radar coordinate system, and dividing the radar and video lanes into equal module areas;
and step 5, performing radar-vision coordinate system point pair selection according to the lane number and the longitudinal queuing position, finishing the point pair selection according to the coverage of the lane module areas, and then calculating the final transformation matrix from the group of point pairs.
2. The automatic radar vision calibration method based on the track tracing as recited in claim 1, wherein the radar target detection point state information in step 1 is structured data obtained after point cloud clustering analysis.
3. The automatic radar vision calibration method based on the track tracing as claimed in claim 1, characterized in that the detection framework in step 2 is Darknet, the target detection model is a YOLOv3-tiny model trained with a training set that is drawn from different traffic data scenes and annotated according to the types of detected targets, and the TensorRT accelerated inference framework is used to accelerate inference during the inference process, i.e. model verification.
4. The automatic radar vision calibration method based on the track tracing as claimed in claim 1, wherein step 3 is specifically: selecting an irregular detection area with a lane line as the front reference, calculating the first transformation matrix from the corner points of the area, and using this matrix to perform the mapping transformation of the detection area;
wherein the first transformation matrix is solved from the perspective-transformation relation (x_img1, y_img1, 1)ᵀ ∝ M1·(x_img0, y_img0, 1)ᵀ, i.e., writing M1 = [m_kl] (k, l = 1, 2, 3):
x_img1 = (m11·x_img0 + m12·y_img0 + m13) / (m31·x_img0 + m32·y_img0 + m33)
y_img1 = (m21·x_img0 + m22·y_img0 + m23) / (m31·x_img0 + m32·y_img0 + m33)
in the formula, x_img1 and y_img1 are the corrected coordinates, into which the four vertex coordinates of the image frame are substituted; x_img0 and y_img0 are the coordinates of the selected detection area, into which its four vertex coordinates are substituted; and M1 is the first transformation matrix.
5. The trajectory-tracing-based radar vision automatic calibration method according to claim 1, wherein the step 4 of analysing the trajectory maps to infer the lanes in reverse specifically comprises the steps of:
step 5-1, binarizing the radar and video target trajectory maps;
step 5-2, applying a further morphological closing operation to the binary images obtained in step 5-1;
and step 5-3, using the Canny operator and particle area filtering to divide the lane areas, numbering the lanes from left to right, and dividing each lane into n area modules.
6. The automatic radar vision calibration method based on the track tracing as claimed in claim 1, wherein the lane reference line and the detection area boundary of the radar coordinate system are calculated in step 4 as follows: longitudinal trajectory accumulation is performed on the radar target trajectories to obtain an accumulation curve, the curve is smoothed with a cubic B-spline curve equation, and extreme points are taken from the two ends of the superimposed data toward the centre, thereby determining the left, right and lower boundaries of the radar detection area;
wherein the longitudinal accumulation formula is:
y_i = Σ_{j=1}^{m} p(j, i), x_i = i
where the image has size m×n and becomes one-dimensional data (one accumulated value per column) after longitudinal superposition; (x_i, y_i) is the accumulated data of the i-th column, and p(j, i) is the pixel value in row j and column i of the original image;
the left, right and lower boundaries are determined as follows: the x values of the left and right extreme points are used as the x values of the left and right boundaries and of the two ends of the lower boundary, and the smaller of the two extreme points' y values is used as the y value of the lower boundary and of the left and right boundaries.
7. The track tracing-based radar vision automatic calibration method according to claim 1, wherein, in step 5, selecting radar-vision coordinate system point pairs according to lane number and longitudinal queuing position, and ending the selection according to the coverage of the lane module areas, is specifically: the targets collected in each frame are classified by lane; the detected targets in each video lane and each radar lane are sorted longitudinally; the detection area modules are matched to the areas occupied by the detected targets; when the lane number and module number of a detected target correspond in the two coordinate systems, the point pair is selected and inserted into the target-pair queue; when every area module has a corresponding target point pair, point-pair selection is finished and the final perspective transformation matrix is calculated.
8. A track tracing-based automatic radar vision calibration system based on the method as claimed in any one of claims 1 to 7, wherein the system comprises the following modules, which are executed in sequence:
the system comprises a first module, a second module and a third module, wherein the first module is used for collecting state information of radar target detection points, including positions, speeds and states;
the second module is used for collecting video images at the same time as the radar, loading a target detection model, inputting the video images into the model for target detection, and acquiring the state information of each detected target, including position, category and size;
the third module is used for selecting a video detection area and correcting the detection area by utilizing perspective transformation to obtain a first transformation matrix;
the fourth module is used for accumulating the trajectories of the video and radar targets within a threshold time t, analysing the trajectory maps of the radar and video targets respectively to infer the lanes in reverse, dividing the radar and video lanes into equal module areas, and calculating the lane reference line and detection area boundary of the radar coordinate system;
and the fifth module is used for performing radar-vision coordinate system point-pair selection according to the lane number and the longitudinal queuing position, ending point-pair selection once the lane module areas are covered, and then calculating the final transformation matrix from the group of point pairs.
9. A computer arrangement comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211154923.3A CN115619873A (en) | 2022-09-21 | 2022-09-21 | Track tracing-based radar vision automatic calibration method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211154923.3A CN115619873A (en) | 2022-09-21 | 2022-09-21 | Track tracing-based radar vision automatic calibration method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115619873A true CN115619873A (en) | 2023-01-17 |
Family
ID=84857766
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211154923.3A Pending CN115619873A (en) | 2022-09-21 | 2022-09-21 | Track tracing-based radar vision automatic calibration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115619873A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117197182A (en) * | 2023-11-07 | 2023-12-08 | 华诺星空技术股份有限公司 | Lei Shibiao method, apparatus and storage medium |
CN117197182B (en) * | 2023-11-07 | 2024-02-27 | 华诺星空技术股份有限公司 | Lei Shibiao method, apparatus and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113421289B (en) | High-precision vehicle track data extraction method for overcoming unmanned aerial vehicle shooting disturbance | |
CN112069944B (en) | Road congestion level determining method | |
CN107180239A (en) | Line of text recognition methods and system | |
CN109284674A (en) | A kind of method and device of determining lane line | |
CN107577996A (en) | A kind of recognition methods of vehicle drive path offset and system | |
CN105718872B (en) | Auxiliary method and system for rapidly positioning lanes on two sides and detecting vehicle deflection angle | |
CN107609486A (en) | To anti-collision early warning method and system before a kind of vehicle | |
CN110889328B (en) | Method, device, electronic equipment and storage medium for detecting road traffic condition | |
CN104766058A (en) | Method and device for obtaining lane line | |
CN108205667A (en) | Method for detecting lane lines and device, lane detection terminal, storage medium | |
CN111149131B (en) | Dividing line recognition device | |
CN112347817B (en) | Video target detection and tracking method and device | |
CN109543493A (en) | A kind of detection method of lane line, device and electronic equipment | |
CN110287907A (en) | A kind of method for checking object and device | |
CN113011285B (en) | Lane line detection method and device, automatic driving vehicle and readable storage medium | |
CN111738071B (en) | Inverse perspective transformation method based on motion change of monocular camera | |
CN112464933B (en) | Intelligent identification method for weak and small target through foundation staring infrared imaging | |
CN104899892A (en) | Method for quickly extracting star points from star images | |
CN115685102A (en) | Target tracking-based radar vision automatic calibration method | |
CN115619873A (en) | Track tracing-based radar vision automatic calibration method | |
CN112149471B (en) | Loop detection method and device based on semantic point cloud | |
CN115841633A (en) | Power tower and power line associated correction power tower and power line detection method | |
CN115100616A (en) | Point cloud target detection method and device, electronic equipment and storage medium | |
CN118247359A (en) | Automatic calibration method and device for fish-eye camera, computer equipment and storage medium | |
CN114724119B (en) | Lane line extraction method, lane line detection device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||