CN117197182B - Radar-vision calibration method, apparatus and storage medium - Google Patents

Radar-vision calibration method, apparatus and storage medium

Info

Publication number
CN117197182B
Authority
CN
China
Prior art keywords
target
radar
video
point
targets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311466239.3A
Other languages
Chinese (zh)
Other versions
CN117197182A (en)
Inventor
王兴
何鑫
刘柯
李飞
陈杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huanuo Xingkong Technology Co ltd
Original Assignee
Huanuo Xingkong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huanuo Xingkong Technology Co ltd filed Critical Huanuo Xingkong Technology Co ltd
Priority to CN202311466239.3A priority Critical patent/CN117197182B/en
Publication of CN117197182A publication Critical patent/CN117197182A/en
Application granted granted Critical
Publication of CN117197182B publication Critical patent/CN117197182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a radar-vision calibration method, a device and a storage medium. The method comprises: performing frame rate alignment and time synchronization on an acquired radar target track and a video stream; obtaining video target information according to the video stream and a target detection model, establishing a video target tracker according to the video target information, and obtaining a video target track in real time from the video target tracker; generating an association region according to the radar target information and the video target information after frame rate alignment and time synchronization; generating a calibration point set according to the association region and the target tracking IDs; and determining a rotation matrix and a translation matrix between the radar and the vision sensor according to the calibration point set, thereby realizing radar-vision calibration. The method associates the tracking tracks of the radar target and the video target through the association region and obtains the calibration point set from the tracking tracks, which guarantees both the number of calibration points and the efficiency of their acquisition.

Description

Radar-vision calibration method, apparatus and storage medium
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a radar-vision calibration method, device and storage medium based on track association.
Background
At restricted entrances, key management areas, level crossings and road sections, statistical analysis of target traffic, target speed, target position and target type within the scene is required. The video stream transmitted by a vision sensor (for example, a camera) is usually processed with an image detection algorithm, which can detect the appearance information, semantic information and number of targets in the image, but cannot obtain the target speed or the actual target position, and detection performance drops sharply in severe weather such as fog and heavy rain, or at night when light is insufficient. A millimeter-wave radar can use the Doppler effect to measure the distance and speed of moving targets and is not affected by illumination, but the amount of information it acquires is insufficient for accurate target classification and identification. To combine the advantages of the vision sensor and the millimeter-wave radar, the data acquired by the two sensors must be processed and fused so as to realize intelligent target analysis across multiple scenes and time periods.
To fuse the data acquired by the vision sensor and the radar accurately, the two sensors must first be calibrated. The current calibration approach is to select calibration points and to compute the rotation matrix and translation matrix between the radar and the vision sensor from several groups of calibration points, so the accuracy of calibration point acquisition directly affects calibration accuracy. At present, calibration points are acquired by manual point selection: a radar target is observed in the radar coordinate system and an image target in the image coordinate system; the radar target corresponding to the image target is selected according to information such as the mutual position relationship between targets, the target type and the target speed; the position of the radar target on the actual target is estimated from experience; and that position is mapped onto the image to obtain several groups of calibration points. Manual point selection mainly has the following shortcomings:
(1) When there are many targets in the scene, the targets are far apart, or the numbers of targets detected by the radar and the vision sensor are inconsistent, manual selection of calibration points becomes confusing and the corresponding radar target cannot be found reliably, so calibration points are selected incorrectly and calibration accuracy is low;
(2) When there are many targets in the scene and they are far from the vision sensor, the targets occupy too few pixels in the image to be distinguished manually, which easily leads to incorrect selection of calibration points;
(3) When the mounting positions of the radar and the vision sensor change, the calibration points must be selected again manually, so calibration efficiency is low.
Another existing method draws a polygonal frame in the image and manually measures the radar coordinate points corresponding to the corner points of the polygonal frame to obtain several groups of calibration points. Its main problem is that at intersections or road sections with heavy traffic there are no conditions for such measurement. In summary, the main disadvantages of manual point selection are inaccurate point selection, low point-acquisition efficiency and demanding application-environment requirements.
With the rapid development of neural networks, calibration methods for the radar and the vision sensor based on neural networks have also appeared. Such a method divides the calibration point set into a training set and a test set, trains the model over multiple rounds on the training set, validates it on the test set, obtains the best-fitting model through repeated tests, and finally calibrates with the optimal model. However, when there are few targets in the scene, a large number of calibration points cannot be obtained, the model easily overfits, calibration accuracy in practical applications is low, and the requirements of radar-vision fusion cannot be met.
Disclosure of Invention
The invention aims to provide a radar-vision calibration method, device and storage medium that solve the problems of inaccurate calibration and low calibration efficiency caused by manual point selection in traditional calibration methods, as well as the problem of low calibration accuracy caused by an insufficient number of calibration points in neural-network-based calibration methods.
The invention solves the above technical problems with the following technical scheme: a radar-vision calibration method based on track association, the calibration method comprising the steps of:
acquiring radar target information detected by a radar, establishing a radar target tracker according to the radar target information, and acquiring a radar target track in real time according to the radar target tracker;
Acquiring a video stream detected by a visual sensor;
performing frame rate alignment and time synchronization on the radar target track and the video stream;
obtaining video target information according to the video stream and a target detection model, establishing a video target tracker according to the video target information, and obtaining a video target track in real time according to the video target tracker;
generating an associated area according to the radar target information and the video target information after frame rate alignment and time synchronization;
generating a calibration point set according to the association region and the target tracking IDs;
and determining a rotation matrix and a translation matrix between the radar and the vision sensor according to the calibration point set, thereby realizing radar-vision calibration.
Further, performing frame rate alignment on the radar target track and the video stream specifically includes:
taking the frame rate of the video stream as a reference frame rate;
when the i-th frame at the reference frame rate has a video image but no radar target is received, taking the radar target of the nearest frame as the radar target corresponding to the i-th frame video image;
when the i-th frame at the reference frame rate has a video image and a radar target is received, taking the received radar target as the radar target corresponding to the i-th frame video image, and updating and caching the track of the radar target according to the radar target tracking ID;
and when no radar target is received for multiple consecutive frames, completing the frame rate alignment of the radar target track and the video stream.
Further, time synchronizing the radar target track with the video stream specifically includes:
finding out a radar target nearest to each frame of video image in time from the radar target track;
calculating a time difference between a time stamp of each frame of video image and a time stamp of a corresponding nearest radar target;
and when the time difference is smaller than a time threshold, updating the radar target according to the movement speed of the radar target, and realizing time synchronization of the radar target track and the video stream.
Further, generating an associated area according to the radar target information and the video target information after frame rate alignment and time synchronization, specifically including:
determining a plurality of pairs of associated targets according to the radar target information and the video target information;
when the number of the associated targets is larger than a first number threshold, selecting the associated target with the highest confidence from the plurality of pairs of associated targets; the associated target with the highest confidence is the pair whose video target has the highest confidence among all associated targets;
And generating an association region according to the association target with the highest confidence, wherein the association region comprises a detection region of the video target with the highest confidence and a detection region of the radar target corresponding to the video target.
Further, the radar target information and the video target information include the number of targets, the positions of the targets and the motion states of the targets, and the determining of the multiple pairs of associated targets according to the radar target information and the video target information specifically includes:
judging whether the number of radar targets is equal to the number of moving video targets or not;
when the number of radar targets is equal to the number of moving video targets, judging whether the relative positions of the radar targets under a radar coordinate system are the same as the relative positions of the moving video targets under a pixel coordinate system;
when the relative position of the radar target under the radar coordinate system is the same as the relative position of the moving video target under the pixel coordinate system, judging whether the moving direction of the radar target under the radar coordinate system is the same as the moving direction of the moving video target under the pixel coordinate system;
and when the moving direction of the radar target under the radar coordinate system is the same as the moving direction of the moving video target under the pixel coordinate system, determining that the radar target and the video target are a pair of associated targets.
Further, determining a video target motion state according to the video target track specifically includes:
for each video target track, extracting a target frame from the video target track at regular intervals, and calculating the intersection-over-union (IoU) of each two adjacent target frames;
comparing each IoU with an IoU threshold to obtain the number of IoU values larger than the IoU threshold;
calculating the ratio between the number of IoU values larger than the IoU threshold and the total number of IoU values;
when the ratio is greater than a ratio threshold, the video target is in a static state; and when the ratio is smaller than or equal to the ratio threshold, the video target is in a motion state.
Further, generating a calibration point set according to the association region and the target tracking IDs specifically includes:
at the current moment, when radar targets and video targets appear in the association area, the number of the radar targets is the same as that of the moving video targets, the relative positions of the radar targets under a radar coordinate system are the same as those of the moving video targets under a pixel coordinate system, and the radar targets in the association area are associated with the video targets according to the radar target tracking ID and the video target tracking ID to obtain a pair of association tracking targets; each pair of associated tracking targets comprises a radar target and a video target, and the relative position of the radar target under a radar coordinate system is the same as the relative position of the video target under a pixel coordinate system;
For each pair of associated tracking targets, reserving a radar target track and a video target track of the associated tracking targets in a period from the appearance of the targets to the disappearance of the targets;
respectively filtering the radar target track and the video target track of the associated tracking target, and taking the filtered radar target track and video target track of the associated tracking target as a calibration point track at the current moment; storing all calibration point tracks at the current moment into the calibration point set;
judging whether the number of calibration point tracks in the calibration point set reaches a second number threshold, and if so, outputting the calibration point set; otherwise, obtaining all calibration point tracks at the next moment and storing them into the calibration point set until the number of calibration point tracks in the calibration point set reaches the second number threshold.
Further, determining a rotation matrix and a translation matrix between the radar and the vision sensor according to the set of calibration points, specifically comprising:
step 7.1: calibrating the visual sensor to obtain an internal reference matrix of the visual sensor;
step 7.2: clustering all the calibration points in the calibration point set according to the distance under a radar coordinate system to obtain different clusters;
step 7.3: randomly selecting one calibration point from each cluster as a current-round calibration point, wherein each calibration point comprises a radar target point and a video target point;
step 7.4: calculating a rotation matrix and a translation matrix between the radar and the vision sensor according to all current-round calibration points;
step 7.5: calculating the current calibration precision according to the internal reference matrix, the rotation matrix and the translation matrix;
step 7.6: when the current calibration precision is larger than the precision threshold, outputting the rotation matrix and translation matrix corresponding to the current calibration precision; and when the current calibration precision is smaller than or equal to the precision threshold, returning to step 7.3.
Further, the specific calculation process of the current calibration precision comprises the following steps:
according to the internal reference matrix, the rotation matrix and the translation matrix, mapping the radar target point of each calibration point in the calibration point set to a pixel coordinate system to obtain a radar target mapping point;
calculating the intersection-over-union between the target frame of the radar target mapping point and the target frame of the video target point of the calibration point;
according to the internal reference matrix, the rotation matrix and the translation matrix, mapping the video target point of each calibration point in the calibration point set to a radar coordinate system to obtain a video target mapping point;
calculating the Euclidean distance between the video target mapping point and the radar target point of the calibration point;
obtaining the distance mapping precision according to the Euclidean distance and the distance threshold value;
calculating the current calibration precision according to the intersection-over-union and the distance mapping precision, with the specific formula:
A_T = (1 / (2N)) · Σ_{i=1}^{N} (TBMA_i + TRMA_i), where TRMA_i = 1 − d_i / D when d_i < D and TRMA_i = 0 otherwise,
wherein A_T is the calibration precision of the T-th round, N is the number of calibration points in the calibration point set, TBMA_i is the intersection-over-union between the target frame of the radar target mapping point and the target frame of the video target point of the i-th calibration point in the set, TRMA_i is the distance mapping precision of the i-th calibration point, D is the distance threshold, and d_i is the Euclidean distance between the video target mapping point and the radar target point of the i-th calibration point.
Based on the same conception, the invention also provides an electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the radar-vision calibration method as described above when executing the computer program.
Based on the same conception, the present invention also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the radar-vision calibration method as described above.
Advantageous effects
Compared with the prior art, the invention has the advantages that:
according to the method, the association region is generated through the radar target information and the video target information, the calibration point track is generated through the association region, and the accuracy of the association region is ensured through the constraint on the target information when the association region is generated, so that the accuracy of the calibration point selection is ensured; according to the invention, the calibration points are acquired without manual participation, so that calibration errors caused by inaccurate manual selection of the calibration points are eliminated, and the acquisition efficiency of the calibration points is improved;
according to the method, the tracking tracks of the radar target and the video target are associated through the association region, and the calibration point set is obtained from the tracking tracks, which guarantees both the number of calibration points and the efficiency of their acquisition; the calibration point set is clustered, calibration points are selected from different clusters, the rotation matrix and translation matrix between the radar and the vision sensor are then calculated from these calibration points, and the rotation matrix and translation matrix are updated according to the calibration precision over all points in the calibration point set, so that the final calibration precision is optimal, the rotation matrix and translation matrix cover the different distance segments in the application scene, and the accuracy of radar-vision fusion is improved.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings needed in the description of the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a radar-vision calibration method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of frame rate alignment of radar target trajectories with a video stream in an embodiment of the invention;
FIG. 3 is a flow chart of the associated region generation in an embodiment of the invention;
FIG. 4 is a schematic diagram of an associated region in an embodiment of the invention;
FIG. 5 is a schematic diagram of associating tracking targets with their trajectories in an embodiment of the invention;
FIG. 6 is a schematic diagram of the clusters obtained after clustering the calibration point set in an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made more apparent and fully by reference to the accompanying drawings, in which it is shown, however, only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the present application is described in detail below with specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
As shown in fig. 1, the radar-vision calibration method based on track association provided by the embodiment of the invention comprises the following steps:
step 1: acquiring radar target information detected by a radar, establishing a radar target tracker according to the radar target information, and acquiring a radar target track in real time according to the radar target tracker;
step 2: acquiring a video stream detected by a visual sensor;
step 3: performing frame rate alignment and time synchronization on a radar target track and a video stream;
step 4: obtaining video target information according to the video stream and the target detection model, establishing a video target tracker according to the video target information, and obtaining a video target track in real time according to the video target tracker;
step 5: generating an associated area according to the radar target information and the video target information after frame rate alignment and time synchronization;
step 6: generating a calibration point set according to the association region and the target tracking IDs;
step 7: determining a rotation matrix and a translation matrix between the radar and the vision sensor according to the calibration point set, thereby realizing radar-vision calibration.
In this embodiment, the vision sensor is a high-definition camera. In step 1 and step 4, target tracking means predicting the previous-frame detection result of a radar target or a video target with a Kalman filter, calculating a cost matrix from the targets predicted for the previous frame and the targets detected in the current frame, then finding the optimal matching result with the Hungarian matching algorithm to associate targets across adjacent frames, repeating the above process to form a target track, and assigning a target tracking ID, i.e., a target tracking serial number, to the track. Building the radar target tracker and the video target tracker is prior art; see the patent document with application publication number CN115685102A, a radar automatic calibration method based on target tracking.
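For readers unfamiliar with this tracking scheme, the association step can be sketched as follows. This is a minimal illustration, not the implementation of the cited patent: boxes are assumed to be (x1, y1, x2, y2) tuples, the cost matrix is built from IoU, the predicted boxes are assumed to come from the Kalman filter's prediction step, and scipy's linear_sum_assignment stands in for a hand-written Hungarian solver.

```python
# Minimal sketch of the frame-to-frame association described above.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(predicted_boxes, detected_boxes, iou_gate=0.3):
    """Match Kalman-predicted boxes to current detections; returns (track_idx, det_idx) pairs."""
    if not predicted_boxes or not detected_boxes:
        return []
    cost = np.array([[1.0 - iou(p, d) for d in detected_boxes] for p in predicted_boxes])
    rows, cols = linear_sum_assignment(cost)        # optimal assignment over the cost matrix
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_gate]
```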
Before the associated region is generated, frame rate alignment is performed and then time synchronization is performed. If the frame rates are not aligned or the time is not synchronous, a large time difference exists between the radar target and the corresponding video target, so that the target cannot be found accurately, and calibration is inaccurate. The frame rate alignment is to make the frame rate of the radar target in the radar target track the same as the frame rate of the video frame by nearest neighbor interpolation. In step 3, as shown in fig. 2, frame rate alignment is performed on the radar target track and the video stream, which specifically includes:
Taking the frame rate of the video stream as a reference frame rate;
when the i-th frame at the reference frame rate has a video image but no radar target is received, taking the radar target of the nearest frame as the radar target corresponding to the i-th frame video image;
when the i-th frame at the reference frame rate has a video image and a radar target is received, taking the received radar target as the radar target corresponding to the i-th frame video image, and updating and caching the track of the radar target according to the radar target tracking ID;
and when no radar target is received for multiple consecutive frames, the radar target is considered to have disappeared, and the frame rate alignment of the radar target track and the video stream is complete.
When the radar detects a target, the radar target is transmitted at a stable frequency, and when the radar does not detect the target, the radar target is not transmitted, and the frame rate of the video stream acquired by the vision sensor is stable, so the frame rate of the video stream is taken as a reference frame rate. When a video image exists at a certain moment but no target is detected by the radar, the nearest radar target is taken as the radar target corresponding to the video image at the moment. For example, as shown in fig. 2, when no radar target is received in the 2 nd frame of video image, the radar target with the reference number 1 is taken as the radar target corresponding to the 2 nd frame of video image.
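A minimal sketch of this nearest-neighbour frame-rate alignment is given below; it assumes each radar frame is a dictionary with a timestamp and its targets, that video frames arrive at the stable reference frame rate, and that a target is declared disappeared after max_missing_frames consecutive video frames without radar data (the frame count is an illustrative assumption).

```python
# Sketch of nearest-neighbour frame-rate alignment: one radar frame (or None) per video frame.
def align_radar_to_video(video_timestamps, radar_frames, max_missing_frames=5):
    aligned, missing, last_radar = [], 0, None
    radar_iter = iter(sorted(radar_frames, key=lambda f: f["timestamp"]))
    pending = next(radar_iter, None)
    for t_video in video_timestamps:
        received = None
        while pending is not None and pending["timestamp"] <= t_video:
            received, pending = pending, next(radar_iter, None)  # latest radar frame up to t_video
        if received is not None:            # a radar frame arrived for this video frame
            last_radar, missing = received, 0
        else:                               # no radar frame: reuse the nearest previous one
            missing += 1
        aligned.append(last_radar if missing <= max_missing_frames else None)
    return aligned
```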
The time stamp of the radar receiving the target reflected wave signal is used as the time stamp of the radar target, and the time stamp of the camera imaging is used as the time stamp of the video image of the current frame. In step 3, time synchronization is performed on the radar target track and the video stream, which specifically includes:
finding out a radar target nearest to each frame of video image in time from the radar target track; calculating a time difference between a time stamp of each frame of video image and a time stamp of a corresponding nearest radar target; and when the time difference is smaller than the time threshold, updating the radar target according to the movement speed of the radar target, and realizing the time synchronization of the radar target track and the video stream.
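The time-synchronization step can be illustrated by the small sketch below, in which the nearest radar target is advanced along its measured velocity over the residual time difference; the 0.1 s time threshold and the constant-velocity model are assumptions of this sketch, not values stated in the patent.

```python
# Sketch of time synchronization: compensate the nearest radar target by its velocity.
def synchronize(radar_target, video_timestamp, time_threshold=0.1):
    dt = video_timestamp - radar_target["timestamp"]
    if abs(dt) >= time_threshold:
        return None                          # too far apart in time to form a valid pair
    x, y = radar_target["position"]          # position in the radar coordinate system
    vx, vy = radar_target["velocity"]        # velocity reported by the radar
    return {**radar_target,
            "position": (x + vx * dt, y + vy * dt),
            "timestamp": video_timestamp}
```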
In step 4, the target detection model adopts an existing model, such as a YOLOv5 model, firstly trains and tests the YOLOv5 model, and then uses the trained YOLOv5 model to perform target detection on the video stream to obtain video target information. And generating an associated area by utilizing the radar target information and the video target information after frame rate alignment and time synchronization, wherein the radar target information and the video target information comprise the number of targets, the positions of the targets and the motion state of the targets. The millimeter wave radar can detect a radially moving target, so that radar target information detected by the radar contains a target motion state, and the video target motion state cannot be obtained through a target detection model and needs to be determined according to a video target track. In this embodiment, determining a motion state of a video object according to a video object track specifically includes:
For each video target track, a target frame is extracted from the track at regular intervals, and the intersection-over-union (IoU) of each two adjacent target frames is calculated; each IoU is compared with an IoU threshold to obtain the number of IoU values larger than the IoU threshold; the ratio between this number and the total number of IoU values is calculated; when the ratio is greater than the ratio threshold, the video target is in a static state, and when the ratio is smaller than or equal to the ratio threshold, the video target is in a motion state.
Each video target corresponds to one video target track. A frame of video image is extracted from the video target track at regular intervals and input into the trained YOLOv5 model to obtain the target frame of the video target. For example, if 21 target frames are extracted from a video target track, 20 IoU values are calculated; if 12 of the 20 IoU values are greater than the IoU threshold, the ratio is 12/20. When 12/20 is greater than the ratio threshold, the video target is considered static; when 12/20 is smaller than or equal to the ratio threshold, the video target is considered moving. In this embodiment, the IoU threshold is set to 0.8 and the ratio threshold to 0.7; the larger the ratio threshold, the more accurate the motion-state detection result.
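The static/moving decision described above reduces to the short sketch below, which reuses the iou() helper from the tracker sketch; the 0.8 IoU threshold and the 0.7 ratio threshold are the values given in this embodiment.

```python
# Sketch of the motion-state test: fraction of adjacent box pairs with high IoU.
def is_static(track_boxes, iou_threshold=0.8, ratio_threshold=0.7):
    pairs = list(zip(track_boxes[:-1], track_boxes[1:]))
    if not pairs:
        return False
    high = sum(1 for a, b in pairs if iou(a, b) > iou_threshold)
    return high / len(pairs) > ratio_threshold   # True: static, False: moving
```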
The associated region refers to a region in the pixel coordinate system that describes the same region as a region in the radar coordinate system. And discarding targets of the non-associated region, and generating a set of calibration points only in the associated region, thereby ensuring the accuracy of the selection of the calibration points and improving the acquisition efficiency of the calibration points. As shown in fig. 3, in step 5, an association area is generated according to the radar target information and the video target information after the frame rate alignment and the time synchronization, which specifically includes:
step 5.1: determining a plurality of pairs of associated targets according to the radar target information and the video target information;
step 5.2: when the number of the associated targets is larger than a first number threshold, selecting the associated target with the highest confidence from the plurality of pairs of associated targets; the associated target with the highest confidence is the pair whose video target has the highest confidence among all associated targets;
step 5.3: and generating an association region according to the association target with the highest confidence, wherein the association region comprises a detection region of the video target with the highest confidence and a detection region of the radar target corresponding to the video target.
The detection area of the video target is a rectangular area determined by the upper and lower edges of the target frame of the video target, and the detection area of the radar target is a rectangular area determined by the upper and lower boundaries of the actual size of the radar target, such as the grid area in fig. 4. When the target detection model detects a video target and outputs a detection result, i.e., a target frame, it also outputs the confidence of that target frame. For example, in the pixel coordinate system of fig. 4, the number attached to each rectangular frame is the confidence of the target frame, and 0.9 is the highest confidence, so the detection area of the video target in the association region is the rectangular area determined by the upper and lower boundaries of the target frame with confidence 0.9.
In step 5.1, determining a plurality of pairs of associated targets according to the radar target information and the video target information, specifically including:
step 5.11: judging whether the number of radar targets is equal to the number of moving video targets or not;
step 5.12: when the number of radar targets is equal to the number of moving video targets, judging whether the relative positions of the radar targets under a radar coordinate system are the same as the relative positions of the moving video targets under a pixel coordinate system;
step 5.13: when the relative position of the radar target under the radar coordinate system is the same as the relative position of the moving video target under the pixel coordinate system, judging whether the moving direction of the radar target under the radar coordinate system is the same as the moving direction of the moving video target under the pixel coordinate system;
step 5.14: and when the moving direction of the radar target under the radar coordinate system is the same as the moving direction of the moving video target under the pixel coordinate system, determining that the radar target and the video target are a pair of associated targets.
In fig. 4, there are 4 pairs of associated targets, where the confidence of the target frame corresponding to the number 0.9 is the highest, so the associated region includes the region determined by the target frame of the video target with the confidence of 0.9 and the region determined by the target size of the radar target corresponding to the video target with the confidence of 0.9, and the video target with the confidence of 0.9 and the corresponding radar target are a pair of associated targets.
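Steps 5.11-5.14 can be sketched as below. The text does not spell out how "same relative position" and "same motion direction" are tested, so the sketch sorts both lists by lateral coordinate and compares the signs of the motion components; the field names and this ordering-based comparison are assumptions, not the patented matching rule.

```python
# Sketch of associated-target determination (steps 5.11-5.14).
def associate_targets(radar_targets, moving_video_targets):
    if len(radar_targets) != len(moving_video_targets):
        return []                                                # step 5.11: counts must match
    radars = sorted(radar_targets, key=lambda t: t["x"])         # lateral position, radar frame
    videos = sorted(moving_video_targets, key=lambda t: t["u"])  # column coordinate, pixel frame
    pairs = []
    for r, v in zip(radars, videos):                             # step 5.12: same relative order
        if (r["vx"] > 0) == (v["du"] > 0):                       # step 5.13: same motion direction
            pairs.append((r, v))                                 # step 5.14: a pair of associated targets
    return pairs
```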
In another embodiment of the present invention, the association region may also be generated manually, mainly in the following two ways:
first kind: a method for manual field measurement. Firstly, drawing an area block under a pixel coordinate system, then measuring coordinate points of a radar coordinate system corresponding to corner points of the area block to obtain the area block of the radar coordinate system, and then determining rectangular areas according to the pixel coordinate system and the upper and lower boundaries of the area block of the radar coordinate system to form a correlation area.
Second kind: a method based on pausing the target in the coordinate systems. When the radar detects a target, the target is displayed in the radar coordinate system and can simultaneously be observed by eye in the video picture. At this moment the display of the target in the radar coordinate system and the pixel coordinate system is paused, an area block containing the video target is drawn manually in the pixel coordinate system, an area block containing the radar target is drawn manually in the radar coordinate system, and a rectangular area is then determined from the upper and lower boundaries of the area blocks in the pixel coordinate system and the radar coordinate system to form the association region.
In step 6, generating a calibration point set according to the association region and the target tracking IDs specifically includes:
Step 6.1: at the current moment, when radar targets and video targets appear in the associated area, the number of the radar targets is the same as that of the moving video targets, the relative positions of the radar targets under a radar coordinate system are the same as those of the moving video targets under a pixel coordinate system, and the radar targets in the associated area are associated with the video targets according to the radar target tracking ID and the video target tracking ID to obtain a pair of associated tracking targets; each pair of associated tracking targets comprises a radar target and a video target, and the relative position of the radar target under the radar coordinate system is the same as the relative position of the video target under the pixel coordinate system;
step 6.2: for each pair of associated tracking targets, reserving a radar target track and a video target track of the associated tracking targets in a time period from the occurrence of the targets to the disappearance of the targets;
step 6.3: respectively applying mean filtering over time intervals to the radar target track and the video target track of the associated tracking target to improve the accuracy of the calibration points in the track, and taking the filtered radar target track and video target track of the associated tracking target as a calibration point track at the current moment; storing all calibration point tracks at the current moment into the calibration point set;
step 6.4: judging whether the number of calibration point tracks in the calibration point set reaches a second number threshold, and if so, outputting the calibration point set; otherwise, in the same way as steps 6.1-6.3, obtaining all calibration point tracks at the next moment and storing them into the calibration point set until the number of calibration point tracks in the calibration point set reaches the second number threshold.
In step 6.1, at a certain moment, a radar target and a video target simultaneously appear in the association area, and the radar target and the video target are 1, so that the radar target and the video target do not need to be associated according to the relative position and the target tracking ID; if the radar target and the video target are both n and n is a positive integer greater than 1, it cannot be known which radar target is associated with which video target, and therefore the radar target and the video target need to be associated according to the relative position and the target tracking ID. As shown in fig. 5, the radar target and the video target are 2, the IDs of the 2 radar targets are ID2 and ID5, the IDs of the 2 video targets are ID18 and ID25, respectively, it is impossible to know which radar target is associated with which video target, and it is possible to further determine that the radar target with ID2 is associated with the video target with ID18, and the radar target with ID5 is associated with the video target with ID25, according to the relative positions, so as to obtain 2 pairs of associated tracking targets. Each associated tracking target comprises a radar target and a video target, and the relative position of the radar target in the radar coordinate system is the same as the relative position of the video target in the pixel coordinate system.
As can be seen from fig. 5, each calibration point track actually comprises one radar target track and one video target track. To ensure a sufficient number of calibration points, the number of calibration point tracks in the calibration point set must satisfy the second number threshold, which in this embodiment is set to 20; that is, when the calibration point set contains 20 or more calibration point tracks, the rotation matrix and translation matrix are determined from the calibration point set.
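A compact sketch of steps 6.2-6.4 follows: each pair of associated tracking targets contributes one mean-filtered calibration point track, and accumulation stops once the set holds the second number threshold (20 in this embodiment). The sliding-window size of the mean filter is an assumption of the sketch.

```python
# Sketch of calibration point track accumulation (steps 6.2-6.4).
import numpy as np

def mean_filter(points, window=5):
    """Sliding-window mean filter over a list of (x, y) track points."""
    pts = np.asarray(points, dtype=float)
    return [pts[max(0, i - window + 1): i + 1].mean(axis=0) for i in range(len(pts))]

def collect_calibration_tracks(associated_tracks, calibration_set, min_tracks=20):
    for radar_track, video_track in associated_tracks:   # one pair per associated tracking target
        calibration_set.append((mean_filter(radar_track), mean_filter(video_track)))
    return len(calibration_set) >= min_tracks            # True once the set is large enough
```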
In step 7, determining a rotation matrix and a translation matrix between the radar and the vision sensor according to the set of calibration points, which specifically includes:
step 7.1: calibrating the camera by using a Zhang Zhengyou calibration algorithm to obtain an internal reference matrix of the camera;
step 7.2: clustering all the calibration points in the calibration point set according to the distance under a radar coordinate system to obtain different clusters;
step 7.3: randomly selecting one calibration point from each cluster as a current-round calibration point, wherein each calibration point comprises a radar target point and a video target point;
step 7.4: calculating a rotation matrix and a translation matrix between the radar and the vision sensor according to all current-round calibration points;
step 7.5: calculating the current calibration precision according to the internal reference matrix, the rotation matrix and the translation matrix;
Step 7.6: when the current calibration precision is greater than the precision threshold, outputting a rotation matrix and a translation matrix corresponding to the current calibration precision; and when the current calibration precision is smaller than or equal to the precision threshold value, turning to step 7.3.
In step 7.2, the number of clusters C is 6, and the 6 clusters after clustering are shown in fig. 6. In step 7.4, the rotation matrix and translation matrix between the radar and the vision sensor are calculated from the 6 current-round calibration points using the PnP (Perspective-n-Point) algorithm; other methods, such as a neural network, can also be used to solve for them. The conversion relation between the radar coordinate system and the pixel coordinate system is:
Z_c · [u, v, 1]^T = K · (R · [X_r, Y_r, Z_r]^T + T) (1)
wherein [u, v, 1]^T is a video target point in the pixel coordinate system, [X_r, Y_r, Z_r]^T is the corresponding radar target point in the radar coordinate system, R is the rotation matrix between the radar coordinate system and the pixel coordinate system, T is the translation matrix between the radar coordinate system and the pixel coordinate system, K is the internal reference matrix of the camera, and Z_c is the Z-axis value of the video target point in the camera coordinate system.
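The sketch below illustrates equation (1) and step 7.4 with OpenCV's solvePnP. Treating the radar measurements as 3-D points on the ground plane (z = 0) and omitting lens distortion are assumptions of the sketch; the patent only states that a PnP algorithm (or, alternatively, a neural network) is used to solve for R and T.

```python
# Sketch of step 7.4 (PnP) and of the projection in equation (1).
import cv2
import numpy as np

def solve_extrinsics(radar_points_xy, pixel_points_uv, K):
    """Estimate R and T from current-round calibration points (radar (x, y) vs pixel (u, v))."""
    object_points = np.array([[x, y, 0.0] for x, y in radar_points_xy], dtype=np.float64)
    image_points = np.array(pixel_points_uv, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    R, _ = cv2.Rodrigues(rvec)            # rotation matrix R and translation T of eq. (1)
    return R, tvec

def radar_to_pixel(point_xyz, K, R, T):
    """Project a radar point via Z_c * [u, v, 1]^T = K * (R * X + T)."""
    cam = R @ np.asarray(point_xyz, dtype=float).reshape(3, 1) + T
    uv1 = K @ cam
    return uv1[0, 0] / uv1[2, 0], uv1[1, 0] / uv1[2, 0]
```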
In order to obtain the optimal calibration precision, the calibration precision of the rotation matrix and the translation matrix is evaluated using the calibration point set, and when the precision threshold is met, the rotation matrix and translation matrix corresponding to that calibration precision are output as the optimal calibration matrices, which greatly improves calibration accuracy. In step 7.5, the specific calculation process of the current calibration precision is as follows:
Step 7.51: according to the internal reference matrix, the rotation matrix and the translation matrix, mapping the radar target point of each calibration point in the calibration point set to the pixel coordinate system to obtain a radar target mapping point. That is, the radar target point of each calibration point P_i is mapped to pixel coordinates according to formula (1); since every target point in the pixel coordinate system corresponds to a target frame, the radar target mapping point also has a corresponding target frame.
Step 7.52: calculating the intersection-over-union between the target frame of the radar target mapping point and the target frame of the video target point of calibration point P_i; this value is defined as the target box mapping accuracy (TBMA, Target Box Mapping Accuracy).
Step 7.53: according to the internal reference matrix, the rotation matrix and the translation matrix, mapping the video target point of each calibration point P_i in the calibration point set to the radar coordinate system to obtain a video target mapping point. That is, the video target point of each calibration point P_i is mapped to radar coordinates according to formula (1).
Step 7.54: calculating the Euclidean distance d_i between the video target mapping point and the radar target point of calibration point P_i, with the specific formula:
d_i = sqrt((x_i − x̂_i)² + (y_i − ŷ_i)²) (2)
where (x_i, y_i) is the radar target point and (x̂_i, ŷ_i) is the video target mapping point, both in the radar coordinate system.
Step 7.55: obtaining the distance mapping accuracy TRMA_i from the Euclidean distance d_i and the distance threshold D, with the specific formula:
TRMA_i = 1 − d_i / D when d_i < D, and TRMA_i = 0 otherwise (3)
Step 7.56: calculating the current calibration precision from the intersection-over-union and the distance mapping accuracy TRMA_i, with the specific formula:
A_T = (1 / (2N)) · Σ_{i=1}^{N} (TBMA_i + TRMA_i) (4)
wherein A_T is the calibration precision of the T-th round, N is the number of calibration points in the calibration point set, TBMA_i is the intersection-over-union between the target frame of the radar target mapping point and the target frame of the video target point of the i-th calibration point P_i in the set, and TRMA_i is the distance mapping accuracy of the i-th calibration point P_i.
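Under the formulas reconstructed above, the per-round precision can be sketched as follows; each calibration point is assumed to already carry its projected radar target frame, its video target frame and the radar-frame distance d_i, and the iou() helper from the tracker sketch is reused. The equal weighting of TBMA and TRMA is part of the reconstruction, not a verbatim formula from the patent.

```python
# Sketch of the round accuracy A_T from TBMA and TRMA over the calibration point set.
def calibration_accuracy(calibration_points, D):
    scores = []
    for p in calibration_points:
        tbma = iou(p["mapped_box"], p["video_box"])   # IoU of projected radar box vs video box
        trma = max(0.0, 1.0 - p["d"] / D)             # distance mapping accuracy from d_i and D
        scores.append((tbma + trma) / 2.0)
    return sum(scores) / len(scores)
```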
In step 7.6, the precision threshold is set to 0.9 to 0.98. When comparing the current calibration precision with the precision threshold, the precision must exceed the threshold within a set time or a set number of iterations; otherwise the calibration fails. That is, if after multiple iterations, or within the set time, the precision threshold still cannot be exceeded, the calibration is considered to have failed. In this embodiment, the set time is 15 to 30 minutes.
The invention can be applied to different scenes, such as the traffic field, by adopting different types of radar in combination with vision sensors.
The embodiment of the invention also provides an electronic device, comprising: a processor and a memory storing a computer program, the processor being configured to implement the radar-vision calibration method described above when executing the computer program.
Although not shown, the electronic device includes a processor that can perform various appropriate operations and processes according to programs and/or data stored in a Read Only Memory (ROM) or programs and/or data loaded from a storage portion into a Random Access Memory (RAM). The processor may be a multi-core processor or may include a plurality of processors. In some embodiments, the processor may comprise a general-purpose main processor and one or more special coprocessors, such as, for example, a Central Processing Unit (CPU), a Graphics Processor (GPU), a neural Network Processor (NPU), a Digital Signal Processor (DSP), and so forth. In the RAM, various programs and data required for the operation of the electronic device are also stored. The processor, ROM and RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
The above-described processor is used in combination with a memory to execute a program stored in the memory, which when executed by a computer is capable of implementing the methods, steps or functions described in the above-described embodiments.
Although not shown, embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements the radar-vision calibration method described above.
Storage media in embodiments of the invention include both permanent and non-permanent, removable and non-removable items that may be used to implement information storage by any method or technology. Examples of storage media include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, read only compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device.
The foregoing disclosure is merely illustrative of specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art will readily recognize that changes and modifications are possible within the scope of the present invention.

Claims (10)

1. A radar-vision calibration method based on track association, characterized by comprising the following steps:
Acquiring radar target information detected by a radar, establishing a radar target tracker according to the radar target information, and acquiring a radar target track in real time according to the radar target tracker;
acquiring a video stream detected by a visual sensor;
performing frame rate alignment and time synchronization on radar target information in the radar target track and video images in the video stream;
obtaining video target information according to video images and target detection models in the video stream, establishing a video target tracker according to the video target information, and acquiring a video target track in real time according to the video target tracker;
generating an associated area according to the radar target information after frame rate alignment and time synchronization and the video target information of the video image;
generating a calibration point set according to the association region and the target tracking IDs;
determining a rotation matrix and a translation matrix between the radar and the vision sensor according to the calibration point set, thereby realizing radar-vision calibration;
the generating of the calibration point set according to the association region and the target tracking IDs specifically comprises:
at the current moment, when radar targets and video targets appear in the association area, the number of the radar targets is the same as that of the moving video targets, the relative positions of the radar targets under a radar coordinate system are the same as those of the moving video targets under a pixel coordinate system, and the radar targets in the association area are associated with the video targets according to the radar target tracking ID and the video target tracking ID to obtain a pair of association tracking targets; each pair of associated tracking targets comprises a radar target and a video target, and the relative position of the radar target under a radar coordinate system is the same as the relative position of the video target under a pixel coordinate system;
For each pair of associated tracking targets, reserving a radar target track and a video target track of the associated tracking targets in a period from the appearance of the targets to the disappearance of the targets;
respectively filtering the radar target track and the video target track of the associated tracking target, and taking the filtered radar target track and video target track of the associated tracking target as a calibration point track at the current moment; storing all calibration point tracks at the current moment into the calibration point set;
judging whether the number of calibration point tracks in the calibration point set reaches a second number threshold, and if so, outputting the calibration point set; otherwise, obtaining all calibration point tracks at the next moment and storing them into the calibration point set until the number of calibration point tracks in the calibration point set reaches the second number threshold.
2. The radar-vision calibration method according to claim 1, wherein performing frame rate alignment on the radar target track and the video stream specifically comprises:
taking the frame rate of the video stream as a reference frame rate;
when the i-th frame at the reference frame rate has a video image but no radar target is received, taking the radar target of the nearest frame as the radar target corresponding to the i-th frame video image;
when the i-th frame at the reference frame rate has a video image and a radar target is received, taking the received radar target as the radar target corresponding to the i-th frame video image, and updating and caching the track of the radar target according to the radar target tracking ID;
and when no radar target is received for multiple consecutive frames, completing the frame rate alignment of the radar target track and the video stream.
3. The radar-vision calibration method according to claim 1, wherein performing time synchronization on the radar target track and the video stream specifically comprises:
finding out a radar target nearest to each frame of video image in time from the radar target track;
calculating a time difference between a time stamp of each frame of video image and a time stamp of a corresponding nearest radar target;
and when the time difference is smaller than a time threshold, updating the radar target according to the movement speed of the radar target, and realizing time synchronization of the radar target track and the video stream.
4. The radar-vision calibration method according to claim 1, wherein generating an association region according to the radar target information and the video target information after frame rate alignment and time synchronization specifically comprises:
Determining a plurality of pairs of associated targets according to the radar target information and the video target information;
when the number of the associated targets is larger than a first number threshold, selecting the associated target with the highest confidence from the plurality of pairs of associated targets; the associated target with the highest confidence is the pair whose video target has the highest confidence among all associated targets;
and generating an association region according to the association target with the highest confidence, wherein the association region comprises a detection region of the video target with the highest confidence and a detection region of the radar target corresponding to the video target.
5. The radar-vision calibration method according to claim 4, wherein the radar target information and the video target information comprise the number of targets, the positions of the targets and the motion states of the targets, and determining a plurality of pairs of associated targets according to the radar target information and the video target information specifically comprises:
judging whether the number of radar targets is equal to the number of moving video targets or not;
when the number of radar targets is equal to the number of moving video targets, judging whether the relative positions of the radar targets under a radar coordinate system are the same as the relative positions of the moving video targets under a pixel coordinate system;
When the relative position of the radar target under the radar coordinate system is the same as the relative position of the moving video target under the pixel coordinate system, judging whether the moving direction of the radar target under the radar coordinate system is the same as the moving direction of the moving video target under the pixel coordinate system;
and when the moving direction of the radar target under the radar coordinate system is the same as the moving direction of the moving video target under the pixel coordinate system, determining that the radar target and the video target are a pair of associated targets.
6. The Lei Shibiao method, wherein determining the video target motion state according to the video target track comprises:
for each video target track, extracting a target frame from the track at fixed time intervals, and calculating the intersection ratio (intersection-over-union) of every two adjacent target frames;
comparing each cross-over ratio with a cross-over ratio threshold value to obtain the number of the cross-over ratios larger than the cross-over ratio threshold value;
calculating the ratio between the number of which the cross ratio is larger than the cross ratio threshold value and the total number of the cross ratios;
when the ratio is greater than a ratio threshold, the video target is in a static state; and when the ratio is smaller than or equal to a ratio threshold, the video target is in a motion state.
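A minimal sketch of this intersection-ratio-based motion-state test; target frames are taken as (x1, y1, x2, y2) boxes and the threshold values are assumed for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_static(sampled_boxes, iou_threshold=0.8, ratio_threshold=0.9):
    """sampled_boxes: target frames sampled from one video track at fixed intervals.
    The track is judged static when most adjacent pairs overlap strongly."""
    ious = [iou(a, b) for a, b in zip(sampled_boxes, sampled_boxes[1:])]
    if not ious:
        return True
    high = sum(1 for v in ious if v > iou_threshold)
    return (high / len(ious)) > ratio_threshold
```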
7. The Lei Shibiao method according to any one of claims 1 to 6, wherein determining a rotation matrix and a translation matrix between the radar and the vision sensor from the set of calibration points comprises:
step 7.1: calibrating the visual sensor to obtain an internal reference matrix of the visual sensor;
step 7.2: clustering all the calibration points in the calibration point set according to the distance under a radar coordinate system to obtain different clusters;
step 7.3: randomly selecting one calibration point from each cluster as a current-round calibration point, wherein each calibration point comprises a radar target point and a video target point;
step 7.4: calculating a rotation matrix and a translation matrix between the radar and the vision sensor according to all the current-round calibration points;
step 7.5: calculating the current calibration precision according to the internal reference matrix, the rotation matrix and the translation matrix;
step 7.6: when the current calibration precision is larger than the precision threshold, outputting the rotation matrix and the translation matrix corresponding to the current calibration precision; and when the current calibration precision is smaller than or equal to the precision threshold, returning to step 7.3.
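A rough Python sketch of the iterative procedure in steps 7.2 to 7.6, using OpenCV's k-means for the clustering and solvePnP for the extrinsic solve, and lifting the 2D radar points onto the ground plane (z = 0); these library choices and the ground-plane assumption belong to this sketch, not to the patent:

```python
import numpy as np
import cv2

def calibrate_radar_to_camera(calib_points, camera_matrix, dist_coeffs,
                              n_clusters=8, precision_threshold=0.8,
                              max_rounds=100, evaluate=None):
    """calib_points: list of (radar_xy, pixel_uv) pairs gathered from the tracks.
    `evaluate` is a callback returning the calibration precision for a candidate (R, t)."""
    radar = np.float32([p[0] for p in calib_points])
    pixel = np.float32([p[1] for p in calib_points])

    # Step 7.2: cluster calibration points by distance in the radar coordinate system.
    n_clusters = min(n_clusters, len(calib_points))
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, labels, _ = cv2.kmeans(radar, n_clusters, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()

    rng = np.random.default_rng()
    for _ in range(max_rounds):
        # Step 7.3: pick one calibration point per cluster for this round.
        idx = [rng.choice(np.flatnonzero(labels == c)) for c in range(n_clusters)]
        obj = np.hstack([radar[idx], np.zeros((len(idx), 1), np.float32)])  # z = 0 plane
        img = pixel[idx]

        # Step 7.4: solve for rotation and translation from the sampled points.
        ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, dist_coeffs)
        if not ok:
            continue
        R, _ = cv2.Rodrigues(rvec)

        # Steps 7.5-7.6: accept the first candidate whose precision clears the threshold.
        if evaluate is None or evaluate(R, tvec) > precision_threshold:
            return R, tvec
    return None
```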
8. The Lei Shibiao method, wherein the specific calculation of the current calibration precision comprises:
according to the internal reference matrix, the rotation matrix and the translation matrix, mapping the radar target point of each calibration point in the calibration point set to the pixel coordinate system to obtain a radar target mapping point;
calculating the intersection ratio between the target frame of the radar target mapping point and the target frame of the video target point of the same calibration point;
according to the internal reference matrix, the rotation matrix and the translation matrix, mapping the video target point of each calibration point in the calibration point set to the radar coordinate system to obtain a video target mapping point;
calculating the Euclidean distance between the video target mapping point and the radar target point of the same calibration point;
obtaining the distance mapping precision according to the Euclidean distance and the distance threshold value;
calculating the current calibration precision according to the intersection ratio and the distance mapping precision, wherein the specific formula is as follows:
wherein A_T is the calibration precision of the T-th round, N is the number of calibration points in the calibration point set, TBMA_i is the intersection ratio between the target frame of the radar target mapping point and the target frame of the video target point of the i-th calibration point in the calibration point set, TRMA_i is the distance mapping precision of the i-th calibration point in the calibration point set, D is the distance threshold, and d_i is the Euclidean distance between the video target mapping point and the radar target point of the i-th calibration point in the calibration point set.
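Since the formula itself is not reproduced above, the following sketch only illustrates the spirit of the precision measure: it averages the intersection-ratio term and a distance term per calibration point, with TRMA_i taken as max(0, 1 - d_i/D); both the averaging and that particular form of TRMA_i are assumptions, and the projection and box callbacks are placeholders:

```python
import numpy as np

def calibration_precision(calib_points, project_radar_to_pixel, project_pixel_to_radar,
                          box_for, iou, distance_threshold):
    """calib_points: list of (radar_point, video_point) pairs.
    project_*: mappings built from the intrinsic matrix, rotation and translation.
    box_for: returns the target frame (box) associated with a point."""
    scores = []
    for radar_pt, video_pt in calib_points:
        # TBMA_i: intersection ratio between the projected radar point's box
        # and the video target point's box in pixel coordinates.
        mapped_px = project_radar_to_pixel(radar_pt)
        tbma = iou(box_for(mapped_px), box_for(video_pt))

        # TRMA_i: distance term between the back-projected video point
        # and the radar point in radar coordinates (assumed form).
        mapped_radar = project_pixel_to_radar(video_pt)
        d = float(np.linalg.norm(np.asarray(mapped_radar) - np.asarray(radar_pt)))
        trma = max(0.0, 1.0 - d / distance_threshold)

        scores.append(0.5 * (tbma + trma))
    return float(np.mean(scores)) if scores else 0.0
```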
9. An electronic device, the device comprising:
a memory for storing a computer program;
a processor configured to implement the Lei Shibiao method according to any one of claims 1 to 8 when executing the computer program.
10. A computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the Lei Shibiao method according to any one of claims 1 to 8.
CN202311466239.3A 2023-11-07 2023-11-07 Lei Shibiao method, apparatus and storage medium Active CN117197182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311466239.3A CN117197182B (en) 2023-11-07 2023-11-07 Lei Shibiao method, apparatus and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311466239.3A CN117197182B (en) 2023-11-07 2023-11-07 Lei Shibiao method, apparatus and storage medium

Publications (2)

Publication Number Publication Date
CN117197182A CN117197182A (en) 2023-12-08
CN117197182B true CN117197182B (en) 2024-02-27

Family

ID=88998327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311466239.3A Active CN117197182B (en) 2023-11-07 2023-11-07 Lei Shibiao method, apparatus and storage medium

Country Status (1)

Country Link
CN (1) CN117197182B (en)

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1291130A (en) * 1968-10-18 1972-09-27 Hollandse Signaalapparaten Bv Radar system for three-dimensional target position finding
US4688046A (en) * 1982-09-13 1987-08-18 Isc Cardion Electronics, Inc. ADF bearing and location for use with ASR and ASDE displays
CA2424151A1 (en) * 2002-04-02 2003-10-02 Thales Nederland B.V. Multi-targets detection method applied in particular to surveillance radars with multi-beamforming in elevation
CN102169177A (en) * 2011-01-21 2011-08-31 西安电子科技大学 Time-domain-characteristic-based method for identifying high-resolution range profile of radar target
CN103323847A (en) * 2013-05-30 2013-09-25 中国科学院电子学研究所 Moving target trace point simulating and track associating method and device
CN104297748A (en) * 2014-10-20 2015-01-21 西安电子科技大学 Before-radar-target-detecting tracking method based on track enhancing
FR3031192A1 (en) * 2014-12-30 2016-07-01 Thales Sa RADAR ASSISTED OPTICAL MONITORING METHOD AND MISSION SYSTEM FOR PROCESSING METHOD
CN105116392A (en) * 2015-09-09 2015-12-02 电子科技大学 AIS and active radar flight path fusion and recognition method
CN105701479A (en) * 2016-02-26 2016-06-22 重庆邮电大学 Intelligent vehicle multi-laser radar fusion recognition method based on target features
WO2020108647A1 (en) * 2018-11-30 2020-06-04 杭州海康威视数字技术股份有限公司 Target detection method, apparatus and system based on linkage between vehicle-mounted camera and vehicle-mounted radar
CN110515073A (en) * 2019-08-19 2019-11-29 南京慧尔视智能科技有限公司 The trans-regional networking multiple target tracking recognition methods of more radars and device
WO2021031338A1 (en) * 2019-08-19 2021-02-25 南京慧尔视智能科技有限公司 Multiple object tracking and identification method and apparatus based on multi-radar cross-regional networking
WO2021170030A1 (en) * 2020-02-28 2021-09-02 华为技术有限公司 Method, device, and system for target tracking
CN113671480A (en) * 2021-07-10 2021-11-19 亿太特(陕西)科技有限公司 Radar and video fusion traffic target tracking method, system, equipment and terminal
CN116068504A (en) * 2021-10-29 2023-05-05 杭州海康威视数字技术股份有限公司 Calibration method, device and equipment for radar and video acquisition equipment and storage medium
CN114299417A (en) * 2021-12-09 2022-04-08 连云港杰瑞电子有限公司 Multi-target tracking method based on radar-vision fusion
CN115619873A (en) * 2022-09-21 2023-01-17 连云港杰瑞电子有限公司 Track tracing-based radar vision automatic calibration method
CN115685102A (en) * 2022-09-21 2023-02-03 连云港杰瑞电子有限公司 Target tracking-based radar vision automatic calibration method
CN115731268A (en) * 2022-11-17 2023-03-03 东南大学 Unmanned aerial vehicle multi-target tracking method based on visual/millimeter wave radar information fusion
CN115965655A (en) * 2023-02-02 2023-04-14 西安电子科技大学 Traffic target tracking method based on radar-vision integration
CN116990768A (en) * 2023-08-02 2023-11-03 南京慧尔视软件科技有限公司 Predicted track processing method and device, electronic equipment and readable medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on perimeter intrusion monitoring and alarm technology for intercity railways based on lidar + video; Li Wentao et al.; Railway Transport and Economy; Vol. 45, No. 07; full text *

Also Published As

Publication number Publication date
CN117197182A (en) 2023-12-08

Similar Documents

Publication Publication Date Title
US11954813B2 (en) Three-dimensional scene constructing method, apparatus and system, and storage medium
CN105608417B (en) Traffic lights detection method and device
CN111340855A (en) Road moving target detection method based on track prediction
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN106128121A (en) Vehicle queue length fast algorithm of detecting based on Local Features Analysis
CN113128434B (en) Method for carrying out 3D target detection on monocular RGB image
CN112150448B (en) Image processing method, device and equipment and storage medium
CN111723778B (en) Vehicle distance measuring system and method based on MobileNet-SSD
CN112115913B (en) Image processing method, device and equipment and storage medium
CN108596032B (en) Detection method, device, equipment and medium for fighting behavior in video
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
CN114926808A (en) Target detection and tracking method based on sensor fusion
CN114758504A (en) Online vehicle overspeed early warning method and system based on filtering correction
CN115376109A (en) Obstacle detection method, obstacle detection device, and storage medium
CN106558069A (en) A kind of method for tracking target and system based under video monitoring
CN113970734A (en) Method, device and equipment for removing snowing noise of roadside multiline laser radar
Dehghani et al. Single camera vehicles speed measurement
CN105957060B (en) A kind of TVS event cluster-dividing method based on optical flow analysis
CN113989755A (en) Method, apparatus and computer readable storage medium for identifying an object
CN117197182B (en) Lei Shibiao method, apparatus and storage medium
CN112099004B (en) Airborne interferometric synthetic aperture radar complex scene elevation inversion method and system
CN106874837A (en) A kind of vehicle checking method based on Computer Vision
CN112183378A (en) Road slope estimation method and device based on color and depth image
CN113916213A (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN115994934B (en) Data time alignment method and device and domain controller

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant