CN117670939B - Multi-camera multi-target tracking method and device, storage medium and electronic equipment
- Publication number: CN117670939B (application number CN202410132014.2A)
- Authority: CN (China)
- Prior art keywords: track, local, target, tracks, camera
- Legal status: Active
Classifications
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/85—Stereo camera calibration
- G06V10/761—Proximity, similarity or dissimilarity measures
Abstract
The application provides a multi-camera multi-target tracking method and device, a storage medium, and electronic equipment, and relates to the technical field of computers. The multi-camera multi-target tracking method comprises the following steps: acquiring a plurality of local tracks, in a target area, acquired by a first camera and a second camera; performing coordinate conversion on the plurality of local tracks to obtain the coordinate values of each local track under a target coordinate system, and calculating the track similarity between any two local tracks based on those coordinate values; and, when the track similarity of a first track and a second track among the plurality of local tracks meets a preset similarity, updating the global track of the target object based on the first track and the second track. The method, device, storage medium, and electronic equipment improve the success rate of cross-camera track matching in multi-camera multi-target tracking.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular to a multi-camera multi-target tracking method and device, a storage medium, and electronic equipment.
Background
Multi-target tracking (Multiple Object Tracking, MOT) is a key technology in the field of computer vision and is widely applied in video surveillance, security, and related fields. Multi-target tracking systems can be divided into single-camera multi-target tracking systems and cross-camera multi-target tracking systems. A single-camera multi-target tracking system tracks a target object within the viewing-angle range of one camera; in actual surveillance deployments, however, a security scene is usually designed to track a target object continuously across a plurality of lenses so as to cover a larger monitoring range, so the target object needs to be monitored by a plurality of cameras during tracking.
In the related art, cross-camera track matching mostly adopts a tracklet-to-tracklet matching method; however, when the video frames of the two cameras are not time-synchronized, the probability of matching failure is high.
Based on this, there is an urgent need for a multi-camera tracking method that improves the success rate of cross-camera track matching in multi-camera multi-target tracking systems.
Disclosure of Invention
The application aims to provide a multi-camera multi-target tracking method and device, a storage medium, and electronic equipment, so as to improve the success rate of cross-camera track matching in multi-camera multi-target tracking.
The application provides a multi-camera multi-target tracking method, which comprises the following steps:
Acquiring a plurality of local tracks, in a target area, of at least one object to be identified, acquired by a first camera and a second camera; the first camera and the second camera are any two cameras, among a plurality of cameras, whose angles of view have an overlapping area; the target area is the overlapping area of the angles of view of the first camera and the second camera; performing coordinate conversion on the plurality of local tracks based on a target coordinate system corresponding to the target area to obtain the coordinate values, under the target coordinate system, corresponding to each local track in the plurality of local tracks, and calculating the track similarity between any two local tracks based on the coordinate values corresponding to each local track; when the track similarity of a first track and a second track in the plurality of local tracks meets a preset similarity, determining the first track and the second track as tracks of a target object, and updating the global track of the target object based on the first track and the second track; wherein the target object is an object among the at least one object to be identified, and the track similarity between any two local tracks is calculated based on the shape similarity of the target frames of the two local tracks and the area intersection ratio of the target frames.
Optionally, the calculating the track similarity between any two local tracks based on the coordinate value corresponding to each local track in the plurality of local tracks includes: calculating the shape similarity of the target frame and the area intersection ratio of the target frame of any two local tracks in the plurality of local tracks based on the coordinate value corresponding to each local track; the local trajectory includes: a plurality of historical tracking locations; the target frame comprises: a plurality of circumscribed rectangular frames corresponding to each of the plurality of history tracking positions; a history tracking position corresponds to an external rectangular frame; calculating the corresponding track similarity based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames; and determining two local tracks with track similarity larger than a preset track similarity threshold and maximum track similarity as tracks of the same object to be identified.
Optionally, the calculating the corresponding track similarity based on the shape similarity of the target frame and the area intersection ratio of the target frame of any two local tracks includes: and calculating the sum of the similarity of sub-tracks corresponding to any two local tracks at each historical tracking position based on the shape similarity of the target frame and the area intersection ratio of the target frame of any two local tracks, so as to obtain the track similarity of any two local tracks.
Optionally, the calculating the shape similarity of the target frames of any two local tracks of the plurality of local tracks based on the coordinate values corresponding to each local track includes: performing track alignment on the plurality of local tracks in the target coordinate system based on the coordinate values of the plurality of historical tracking positions corresponding to each local track; acquiring the width and the height of the target frames of a first track to be matched and a second track to be matched in the plurality of local tracks, and calculating a first absolute value and a second absolute value; the first absolute value is the absolute value of the difference between the widths of the target frames of the first track to be matched and the second track to be matched; the second absolute value is the absolute value of the difference between the heights of the target frames of the first track to be matched and the second track to be matched; calculating a first ratio of the first absolute value to the width of a target track to be matched and a second ratio of the second absolute value to the height of the target track to be matched; and calculating a first difference between a target integer and the first ratio and a second difference between the first difference and the second ratio, and determining the second difference as the shape similarity of the first track to be matched and the second track to be matched; the target track to be matched is any one of the first track to be matched and the second track to be matched.
Optionally, the shape similarity of any two local trajectories is calculated based on the following formula:
Shape(A, B) = 1 − |W_A − W_B| / W_A − |H_A − H_B| / H_A
wherein Shape(A, B) is the shape similarity of local track A and local track B, W represents the width of the circumscribed rectangular frame, H represents the height of the circumscribed rectangular frame, and A and B represent the two local tracks.
Optionally, the track similarity of any two local tracks is calculated based on the following formula:
Sim(A, B) = α · IoU(A, B) + β · Shape(A, B)
wherein Sim(A, B) is the track similarity of local track A and local track B, α and β are weight coefficients whose sum is 1, IoU(A, B) is the area intersection ratio of local track A and local track B, and Shape(A, B) is the shape similarity of local track A and local track B.
Optionally, the updating the global track of the target object based on the first track and the second track includes: updating a target tracker corresponding to the target object based on the track information of the first track or the track information of the second track; the target tracker is used for tracking the motion trail of the target object.
Optionally, after calculating the corresponding track similarity based on the shape similarity of any two local tracks and the area intersection ratio of the target frame, the method further includes: generating a predicted track corresponding to a first target local track based on a historical track of the first target local track under the condition that any first target local track in the plurality of local tracks is not matched with a local track with track similarity larger than the preset track similarity threshold; and calculating the track similarity of the predicted track and other local tracks except the first target local track in the plurality of local tracks based on the coordinate values corresponding to the predicted track under the target coordinate system.
Optionally, the generating, based on the historical track of the first target local track, a predicted track corresponding to the first target local track includes: and predicting the track of the target object to be identified corresponding to the first target local track within a future period of time based on the historical track of the first target local track to obtain a plurality of predicted tracks.
Optionally, the predicting, based on the historical track of the first target local track, a track of the target object to be identified corresponding to the first target local track in a future period of time, to obtain a plurality of predicted tracks includes: predicting the position of the target to be identified at the future moment based on the historical track of the first target local track to obtain a plurality of predicted position points; and generating a plurality of predicted tracks based on the position information corresponding to the plurality of predicted position points and the first target local track.
Optionally, the calculating the track similarity between the predicted track and the other local tracks except the first target local track in the plurality of local tracks includes: calculating the track similarity corresponding to each predicted track in the plurality of predicted tracks and other local tracks except the first target local track in the plurality of local tracks to obtain a plurality of candidate track similarity; and screening target candidate track similarity with highest track similarity from the plurality of candidate track similarity, and determining a local track corresponding to the target candidate track similarity and the first target local track as the track of the same object to be identified.
Optionally, the calculating the track similarity between the predicted track and the other local tracks except the first target local track in the plurality of local tracks includes: intercepting a first sub-track corresponding to a future time after the current time and a second sub-track corresponding to a history time before the current time in the predicted track, and fusing the first sub-track and the second sub-track to generate a target sub-track; the track length of the second sub-track is smaller than the track length of the first target local track; and calculating the track similarity of the target sub-track and other local tracks except the first target local track in the plurality of local tracks.
Optionally, the calculating the track similarity between the target sub-track and other local tracks except the first target local track in the plurality of local tracks includes: intercepting a third sub-track corresponding to a historical moment before the current moment in the second target local track; the track length of the third sub-track is smaller than the target sub-track; the second target local track is any local track except the first target local track in the plurality of local tracks; sequentially intercepting fourth sub-tracks with the same track length as the third sub-track from one end of the target sub-track to obtain a plurality of fourth sub-tracks, and calculating track similarity of the third sub-track and each fourth sub-track in the plurality of fourth sub-tracks to obtain a plurality of track similarity; and taking the track similarity with the highest similarity value in the track similarities as the track similarity between the predicted track and the second target local track.
Optionally, the updating the global track of the target object based on the first track and the second track includes: updating a target tracker corresponding to the target object based on the track information of the target candidate track; the target tracker is used for tracking the motion trail of the target object.
Optionally, before the acquiring the plurality of local trajectories of the at least one object to be identified acquired by the first camera and the second camera in the target area, the method further includes: performing feature point matching on images acquired by any two cameras, and calculating an image transformation matrix between the images acquired by any two cameras according to a matching result; image fusion is carried out on the images of any two cameras according to the image transformation matrix, whether an overlapping area exists in the angle of view between any two cameras is determined, and a calibration result is generated; the calibration result is used for indicating whether an overlapping area of the field angle exists between any two cameras.
Optionally, the acquiring a plurality of local trajectories of at least one object to be identified acquired by the first camera and the second camera in the target area includes: and judging whether an angle of view overlapping area exists between the first camera and the second camera based on the calibration result, and acquiring a plurality of local tracks of at least one object to be identified, which are acquired by the first camera and the second camera, in a target area under the condition that the angle of view overlapping area exists between the first camera and the second camera.
Optionally, the acquiring, in a case where it is determined that there is a field angle overlapping area between the first camera and the second camera, a plurality of local trajectories of at least one object to be identified acquired by the first camera and the second camera in the target area includes: determining region coordinate information of the target region based on the target coordinate system under the condition that a field angle overlapping region exists between the first camera and the second camera, and converting the region coordinate information into first coordinate information of a coordinate system corresponding to the first camera and second coordinate information of a coordinate system corresponding to the second camera; determining an overlap region in a field angle of the first camera based on the first coordinate information, and determining an overlap region in a field angle of the second camera based on the second coordinate information; the local track acquired by the first camera is acquired based on the overlapping area in the field angle of the first camera, and the local track acquired by the second camera is acquired based on the overlapping area in the field angle of the second camera.
Also provided is a multi-camera multi-target tracking apparatus, comprising:
The acquisition module is used for acquiring a plurality of local tracks of at least one object to be identified, which are acquired by the first camera and the second camera, in the target area; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera; the track matching module is used for carrying out coordinate conversion on the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks; the track updating module is used for determining a first track and a second track as tracks of a target object under the condition that track similarity of the first track and the second track in the plurality of local tracks meets preset similarity, and updating the global track of the target object based on the first track and the second track; wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: and calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames.
Optionally, the track matching module is specifically configured to calculate a shape similarity of a target frame and an area intersection ratio of the target frame of any two local tracks in the plurality of local tracks based on coordinate values corresponding to each local track; the local trajectory includes: a plurality of historical tracking locations; the target frame comprises: a plurality of circumscribed rectangular frames corresponding to each of the plurality of history tracking positions; a history tracking position corresponds to an external rectangular frame; the track matching module is specifically used for calculating the corresponding track similarity based on the shape similarity of the target frame of any two local tracks and the area intersection ratio of the target frame; the track matching module is specifically configured to determine two local tracks with track similarity greater than a preset track similarity threshold and the track similarity being the largest as tracks of the same object to be identified.
Optionally, the track matching module is specifically configured to calculate a sum of similarities of sub-tracks corresponding to each historical tracking position of any two local tracks based on a shape similarity of a target frame and an area intersection ratio of the target frame of any two local tracks, so as to obtain a track similarity of any two local tracks.
Optionally, the track matching module is specifically configured to perform track alignment on the plurality of local tracks in the target coordinate system based on the coordinate values of the plurality of historical tracking positions corresponding to each local track; the acquisition module is further configured to acquire the width and the height of the target frames of a first track to be matched and a second track to be matched in the plurality of local tracks, and to calculate a first absolute value and a second absolute value; the first absolute value is the absolute value of the difference between the widths of the target frames of the first track to be matched and the second track to be matched; the second absolute value is the absolute value of the difference between the heights of the target frames of the first track to be matched and the second track to be matched; the track matching module is specifically configured to calculate a first ratio of the first absolute value to the width of a target track to be matched and a second ratio of the second absolute value to the height of the target track to be matched; the track matching module is specifically configured to calculate a first difference between a target integer and the first ratio and a second difference between the first difference and the second ratio, and to determine the second difference as the shape similarity of the first track to be matched and the second track to be matched; the target track to be matched is any one of the first track to be matched and the second track to be matched.
Optionally, the shape similarity of any two local trajectories is calculated based on the following formula:
Shape(A, B) = 1 − |W_A − W_B| / W_A − |H_A − H_B| / H_A
wherein Shape(A, B) is the shape similarity of local track A and local track B, W represents the width of the circumscribed rectangular frame, H represents the height of the circumscribed rectangular frame, and A and B represent the two local tracks.
Optionally, the track similarity of any two local tracks is calculated based on the following formula:
Sim(A, B) = α · IoU(A, B) + β · Shape(A, B)
wherein Sim(A, B) is the track similarity of local track A and local track B, α and β are weight coefficients whose sum is 1, IoU(A, B) is the area intersection ratio of local track A and local track B, and Shape(A, B) is the shape similarity of local track A and local track B.
Optionally, the track updating module is specifically configured to update a target tracker corresponding to the target object based on track information of the first track or track information of the second track; the target tracker is used for tracking the motion trail of the target object.
Optionally, the track matching module is further configured to generate, based on a historical track of the first target local track, a predicted track corresponding to the first target local track when any one of the plurality of local tracks is not matched with a local track with a track similarity greater than the preset track similarity threshold; the track matching module is further configured to calculate track similarity between the predicted track and other local tracks except the first target local track in the plurality of local tracks based on coordinate values corresponding to the predicted track in the target coordinate system.
Optionally, the track matching module is specifically configured to predict a track of the target object to be identified corresponding to the first target local track within a future period of time based on the historical track of the first target local track, so as to obtain a plurality of predicted tracks.
Optionally, the track matching module is specifically configured to predict a position of the target to be identified at a future time based on a historical track of the first target local track, so as to obtain a plurality of predicted position points; the track matching module is specifically further configured to generate the plurality of predicted tracks based on position information corresponding to the plurality of predicted position points and the first target local track.
Optionally, the track matching module is specifically configured to calculate track similarities corresponding to each of the plurality of predicted tracks and other local tracks except the first target local track in the plurality of local tracks, so as to obtain a plurality of candidate track similarities; the track matching module is specifically further configured to screen out target candidate track similarity with highest track similarity from the plurality of candidate track similarities, and determine a local track corresponding to the target candidate track similarity and the first target local track as a track of the same object to be identified.
Optionally, the track matching module is specifically configured to intercept a first sub-track corresponding to a future time after the current time and a second sub-track corresponding to a history time before the current time in the predicted track, and fuse the first sub-track with the second sub-track to generate a target sub-track; the track length of the second sub-track is smaller than the track length of the first target local track; the track matching module is specifically further configured to calculate track similarity between the target sub-track and other local tracks except the first target local track in the multiple local tracks.
Optionally, the track matching module is specifically configured to intercept a third sub-track corresponding to a history time before the current time in the second target local track; the track length of the third sub-track is smaller than the target sub-track; the second target local track is any local track except the first target local track in the plurality of local tracks; the track matching module is specifically further configured to sequentially intercept fourth sub-tracks with the same track length as the third sub-track from one end of the target sub-track, obtain a plurality of fourth sub-tracks, and calculate track similarity between the third sub-track and each of the fourth sub-tracks to obtain a plurality of track similarity; the track matching module is specifically further configured to use a track similarity with a highest similarity value among the plurality of track similarities as a track similarity between the predicted track and the second target local track.
Optionally, the track updating module is specifically configured to update a target tracker corresponding to the target object based on track information of the target candidate track; the target tracker is used for tracking the motion trail of the target object.
Optionally, the apparatus further comprises: a calibration module; the calibration module is used for carrying out characteristic point matching on the images acquired by any two cameras in the plurality of cameras and calculating an image transformation matrix between the images acquired by any two cameras according to a matching result; the calibration module is further used for carrying out image fusion on the images of any two cameras according to the image transformation matrix, determining whether an overlapping area exists in the field angle between any two cameras, and generating a calibration result; the calibration result is used for indicating whether an overlapping area of the field angle exists between any two cameras.
Optionally, the acquiring module is specifically configured to determine whether an overlapping area of a field angle exists between the first camera and the second camera based on the calibration result, and acquire a plurality of local trajectories, in the target area, of at least one object to be identified acquired by the first camera and the second camera, when it is determined that the overlapping area of the field angle exists between the first camera and the second camera.
Optionally, the apparatus further comprises: a determining module; the determining module is used for determining area coordinate information of the target area based on the target coordinate system and converting the area coordinate information into first coordinate information of a coordinate system corresponding to the first camera and second coordinate information of a coordinate system corresponding to the second camera under the condition that an overlapping area of a view angle exists between the first camera and the second camera; the determining module is further configured to determine an overlapping region in a field angle of view of the first camera based on the first coordinate information, and determine an overlapping region in a field angle of view of the second camera based on the second coordinate information; the acquisition module is specifically further configured to acquire a local track acquired by the first camera based on an overlapping region in a field angle of view of the first camera, and acquire a local track acquired by the second camera based on an overlapping region in a field angle of view of the second camera.
The application also provides a computer program product comprising a computer program/instructions which, when executed by a processor, implements the steps of the multi-camera multi-target tracking method described in any of the above.
The application also provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the multi-camera multi-target tracking method described in any of the above when executing the program.
The application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the multi-camera multi-target tracking method described in any of the above.
According to the multi-camera multi-target tracking method and device, storage medium, and electronic equipment of the application, a plurality of local tracks, in a target area, of at least one object to be identified acquired by a first camera and a second camera are first acquired; the first camera and the second camera are any two cameras, among a plurality of cameras, whose angles of view have an overlapping area, and the target area is that overlapping area. Then, the plurality of local tracks are coordinate-converted based on a target coordinate system corresponding to the target area to obtain the coordinate values of each local track under the target coordinate system, and the track similarity between any two local tracks is calculated from those coordinate values. Finally, when the track similarity of a first track and a second track in the plurality of local tracks meets a preset similarity, the first track and the second track are determined to be tracks of a target object, and the global track of the target object is updated based on the first track and the second track; the target object is an object among the at least one object to be identified, and the track similarity between any two local tracks is calculated based on the shape similarity of the target frames of the two tracks and the area intersection ratio of the target frames. In this way, the success rate of cross-camera track matching in the multi-camera multi-target tracking method can be improved.
Drawings
In order to more clearly illustrate the application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the application, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a multi-camera multi-target tracking method according to the present application;
FIG. 2 is a second flow chart of the multi-camera multi-target tracking method according to the present application;
FIG. 3 is a third flow chart of the multi-camera multi-target tracking method according to the present application;
FIG. 4 is a fourth flow chart of the multi-camera multi-target tracking method according to the present application;
FIG. 5 is a fifth flow chart of a multi-camera multi-target tracking method according to the present application;
FIG. 6 is a sixth flow chart of the multi-camera multi-target tracking method according to the present application;
FIG. 7 is a schematic diagram of a multi-camera multi-target tracking apparatus according to the present application;
fig. 8 is a schematic structural diagram of an electronic device provided by the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
In the related art, a multi-camera multi-target tracking method includes two stages: 1. a local track generation stage, which tracks each detected object within a single camera and generates a local track for it; 2. a multi-camera track matching stage, which matches the local tracks across all cameras to generate, for each target, its complete track in the entire multi-camera network. Local track generation in stage one is accomplished primarily by multi-target tracking techniques under a single camera. For the multi-camera track matching stage in stage two, most existing approaches use tracklet-to-tracklet matching to solve this problem.
In multi-channel video target detection and tracking, a primary precondition is to ensure synchronized video acquisition among the multiple cameras; however, due to the limitations of actual acquisition conditions, the problem of unsynchronized video frames occasionally occurs, causing track matching failures in the stage-two multi-camera track matching. In order to solve the multi-camera track tracking failures caused by unsynchronized video in the related art, and to accelerate the tracking-matching computation, the embodiment of the application converts the stage-two matching mode from tracklet-to-tracklet matching into object-to-object matching, according to the characteristics of the multiple cameras and of the moving targets.
As shown in fig. 1, the multi-camera multi-target tracking method according to the embodiment of the present application comprises the following steps: 1. Camera Field of View (FOV) overlap region calibration. Overlap-region position information between cameras is generated by calibrating the overlapping areas among the multiple cameras; this calibration is computed offline and is completed before the multi-target tracking system actually runs. 2. Multi-camera multi-target tracking. In the multi-camera multi-target tracking system, this specifically comprises: 2.1, a local track generation stage: a target object is detected and tracked within a single camera to generate local track information; 2.2, a local track analysis stage: according to the offline-calibrated camera FOV overlap regions, the local tracks that require global track analysis are identified; 2.3, coordinate mapping: the local tracks requiring global track analysis are mapped into a unified global coordinate system; and 2.4, cross-camera track matching analysis is performed on the transformed local tracks under the same coordinate system, and identical tracks across different cameras are merged.
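As a concrete illustration of the offline calibration link, the following sketch estimates the image transformation matrix between two cameras by feature-point matching, as the calibration steps describe; OpenCV, ORB features, and RANSAC are illustrative assumptions rather than the patent's prescribed tools, and the function name is hypothetical.

```python
# Sketch of the offline FOV-overlap calibration (step 1), assuming OpenCV.
# ORB features and RANSAC homography are illustrative choices; the patent
# only requires feature-point matching plus an image transformation matrix.
import cv2
import numpy as np

def calibrate_overlap(img1, img2, min_matches=20):
    """Estimate the homography from camera 1's image plane to camera 2's,
    and report whether the two fields of view overlap."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None, False  # too few correspondences: no usable overlap
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    has_overlap = H is not None and int(mask.sum()) >= min_matches
    return H, has_overlap  # calibration result stored for online tracking
```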
The multi-camera multi-target tracking method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 2, the multi-camera multi-target tracking method provided by the embodiment of the present application may include the following steps 201 to 203:
Step 201, a plurality of local tracks of at least one object to be identified, which are acquired by a first camera and a second camera, in a target area are acquired.
The first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is a field angle overlapping area of the first camera and the second camera.
Illustratively, the first camera and the second camera are any two cameras of the multi-camera multi-target tracking system whose fields of view overlap (the overlap being the target region). The local tracks are tracks within this field-of-view overlap region. The overlap region may be calibrated in advance.
The at least one object to be identified is an object appearing in the target area, and two local tracks, namely a local track acquired by the first camera and a local track acquired by the second camera, exist in the target area.
Step 202, performing coordinate transformation on the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks.
The target coordinate system is used for converting coordinate values of any pixel point in images acquired by the first camera and the second camera into coordinate values in the same coordinate system.
The multi-target tracking method of the multi-camera provided by the embodiment of the application mainly aims at matching local tracks of overlapping areas of camera angles of view, and generates a global track of any target object according to a matching result. The track matching can be used for calculating the shape similarity and the area intersection ratio between the local tracks based on the circumscribed rectangular frames corresponding to the local tracks, and the track similarity between the two local tracks is calculated by adopting a Hungary matching algorithm on the basis of a calculation result.
The circumscribed rectangular frame in the embodiment of the application may be the minimum rectangle that can contain the local track, i.e., a rectangle circumscribing the local track.
For example, before performing the track similarity calculation on any two local tracks, the coordinates of each position point in the two local tracks need to be unified, that is, the coordinate value corresponding to each position point in each local track is converted from the camera coordinate system to the coordinate system corresponding to the target area, so as to facilitate the matching of the local tracks.
In the target detection and tracking link, the multi-camera multi-target tracking method provided by the embodiment of the application first obtains video images from each camera, and then performs target detection and tracking based on each camera's video stream. In the embodiment of the present application, target detection and tracking are not limited to a particular detector or tracker; for example, YOLOX may be used for target detection, with the ByteTrack algorithm for target tracking. In the feature extraction link, the embodiment of the application extracts the position information, movement direction information, and movement speed information of the motion track, for use in subsequent track matching.
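To make the local-track bookkeeping concrete, a minimal sketch of the per-camera track record that the later matching steps operate on is given below; the class and field names are hypothetical, and any detector/tracker pair (such as the YOLOX + ByteTrack combination mentioned above) could populate it.

```python
# Minimal sketch of a per-camera local track record (names are hypothetical).
# Each history entry keeps the tracked position with its circumscribed
# rectangle, plus the motion features extracted for later track matching.
from dataclasses import dataclass, field

@dataclass
class TrackPoint:
    t: float            # frame timestamp
    x: float; y: float  # position in the camera's pixel coordinate system
    w: float; h: float  # width/height of the circumscribed rectangular frame

@dataclass
class LocalTrack:
    track_id: int
    camera_id: int
    history: list[TrackPoint] = field(default_factory=list)  # historical tracking positions
    direction: tuple[float, float] = (0.0, 0.0)  # movement-direction feature
    speed: float = 0.0                           # movement-speed feature
```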
After target detection and tracking within a single camera are completed, the embodiment of the present application focuses on track matching between cameras. Unlike the processing approach of the traditional method, which analyzes and processes all local tracks, the method processes only the local tracks within each camera overlap region. As shown in fig. 3, take the matching of the tracks of object A acquired by sub-camera 1 and object B acquired by sub-camera 2 within the overlapping region of the two cameras' angles of view, the two being the same object C: first, local track analysis is performed according to the target objects tracked within each camera's view range; then, according to the local track analysis result, the local tracks of the overlap region acquired by each sub-camera are coordinate-mapped based on a pseudo coordinate system corresponding to the overlap region (i.e., the target coordinate system); and finally, matching calculation of identical tracks is performed.
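The coordinate mapping step can be sketched as follows: each overlap-region track point is projected into the shared pseudo coordinate system using the camera's calibrated transformation matrix. The 3×3 homography form and the helper name are assumptions for illustration.

```python
# Sketch of mapping a local track into the overlap pseudo coordinate system,
# assuming each camera has a calibrated 3x3 homography H: camera -> target system.
import numpy as np

def to_target_coords(track_points, H):
    """track_points: iterable of (x, y) in the camera's pixel coordinates.
    Returns the same points expressed in the target (overlap) coordinate system."""
    pts = np.asarray(track_points, dtype=np.float64)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T   # apply the homography
    return homog[:, :2] / homog[:, 2:3]    # de-homogenize
```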
Specifically, the method for calculating the track similarity in the step 202 may include the following steps 202a, 202b and 202c:
Step 202a, calculating the shape similarity of the target frame and the area intersection ratio of the target frame of any two local tracks in the plurality of local tracks based on the coordinate value corresponding to each local track.
Wherein the local trajectory comprises: a plurality of historical tracking locations; the target frame comprises: a plurality of circumscribed rectangular frames corresponding to each of the plurality of history tracking positions; one history tracking position corresponds to one circumscribed rectangular frame. The circumscribed rectangle frame is a rectangle frame generated outside the target when tracking and detecting the target.
For example, after the local trajectories in the overlapping region are acquired, the coordinates of each local trajectory may be mapped into a coordinate system corresponding to the overlapping region, and then the trajectory similarity between any two local trajectories is calculated.
Specifically, the method for calculating the shape similarity of the target frame of any two local trajectories in step 202a may include the following steps 202a1 to 202a4:
step 202a1, performing track alignment on the plurality of local tracks in the target coordinate system based on coordinate values of the plurality of historical tracking positions corresponding to each local track.
Step 202a2, obtaining the width and the height of the circumscribed rectangular frame of the first track to be matched and the second track to be matched in the plurality of local tracks, and calculating a first absolute value and a second absolute value.
The first absolute value is the absolute value of the difference between the widths of the circumscribed rectangular frames of the first track to be matched and the second track to be matched; the second absolute value is the absolute value of the difference between the heights of the circumscribed rectangular frames of the first track to be matched and the second track to be matched.
Step 202a3, calculating a first ratio of the first absolute value to the width of the target track to be matched and a second ratio of the second absolute value to the height of the target track to be matched.
Step 202a4, calculating a first difference value between the target integer and the first ratio, and a second difference value between the first difference value and the second ratio, and determining the second difference value as a shape similarity between the first track to be matched and the second track to be matched.
The target track to be matched is any one of the first track to be matched and the second track to be matched. The target integer may be 1.
Specifically, the shape similarity in the above steps can be calculated by the following formula one:
Shape(A, B) = 1 − |W_A − W_B| / W_A − |H_A − H_B| / H_A (formula one)
wherein Shape(A, B) is the shape similarity, W represents the width of the circumscribed rectangular frame, H represents the height of the circumscribed rectangular frame, and A and B represent the two local tracks.
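A minimal sketch of formula one, assuming track A's frame supplies the normalizing width and height (the claim allows either track to serve as the target track to be matched):

```python
# Shape similarity of two circumscribed rectangular frames (formula one).
# Track A's width/height are used as the normalizers, per "any one of the
# first and second tracks to be matched" in the claim.
def shape_similarity(wa, ha, wb, hb):
    return 1.0 - abs(wa - wb) / wa - abs(ha - hb) / ha
```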
Specifically, the area intersection ratio in the above step can be calculated by the following formula two:
IoU(A, B) = Area(A ∩ B) / Area(A ∪ B) (formula two)
wherein IoU(A, B) is the intersection ratio of the areas of the target frames corresponding to local track A and local track B.
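Formula two is the standard intersection-over-union of the two target frames. A sketch over axis-aligned boxes, assuming an (x1, y1, x2, y2) corner convention in the shared target coordinate system:

```python
# Area intersection ratio (IoU) of two target frames (formula two),
# with boxes given as (x1, y1, x2, y2) corners in the target coordinate system.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```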
Step 202b, calculating the corresponding track similarity based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames.
When calculating the shape similarity of the target frames and the area intersection ratio of the target frames of any two local tracks, the calculation may be based on the target frame corresponding to the tracking position at the current time, or based on the target frames corresponding to each historical tracking position.
Specifically, based on the shape similarity of the target frames and the area intersection ratio of the target frames of two local tracks, the track similarity of the two local tracks can be calculated by the following formula three:
Sim(A, B) = α · IoU(A, B) + β · Shape(A, B) (formula three)
wherein Sim(A, B) is the track similarity between local track A and local track B, and α and β are weight coefficients with α + β = 1.
In one possible implementation manner, the step 202b may further include the following step 202b1:
Step 202b1, calculating the sum of the similarity of sub-tracks corresponding to any two local tracks at each historical tracking position based on the shape similarity of the target frame and the area intersection ratio of the target frame of any two local tracks, and obtaining the track similarity of any two local tracks.
Illustratively, the track similarity between local track A and local track B may be calculated by the following formula four:
Sim(A, B) = Σ_{i=1}^{N} Sim_i(A, B) (formula four)
wherein Sim_i(A, B) is the sub-track similarity of local track A and local track B at the i-th of N time points, so that Sim(A, B) is the sum of the similarities over the N time points. The value of N is set according to project requirements, for example N = 10.
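Combining formulas three and four, the following sketch sums the weighted per-time-point similarity over the last N aligned tracking positions, reusing the iou and shape_similarity helpers and the LocalTrack record sketched above; the default weights and N = 10 follow the example values in the text.

```python
# Track similarity of two aligned local tracks (formulas three and four):
# per-time-point Sim_i = alpha * IoU_i + beta * Shape_i, summed over N points.
def track_similarity(track_a, track_b, alpha=0.5, beta=0.5, n=10):
    total = 0.0
    for pa, pb in zip(track_a.history[-n:], track_b.history[-n:]):
        # boxes from (x, y, w, h), assuming (x, y) is the top-left corner
        box_a = (pa.x, pa.y, pa.x + pa.w, pa.y + pa.h)
        box_b = (pb.x, pb.y, pb.x + pb.w, pb.y + pb.h)
        sim_i = alpha * iou(box_a, box_b) \
              + beta * shape_similarity(pa.w, pa.h, pb.w, pb.h)
        total += sim_i
    return total
```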
Step 202c, determining two local tracks with track similarity greater than a preset track similarity threshold and the track similarity maximum as tracks of the same object to be identified.
For example, among the plurality of matching results, a match may be deemed successful only if the track similarity of the two local tracks is greater than the preset track similarity threshold; otherwise, the match fails.
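The Hungarian matching mentioned earlier, gated by the preset track similarity threshold, might look like the following sketch; SciPy's linear_sum_assignment and the threshold value of 0.6 are illustrative assumptions.

```python
# Sketch of cross-camera association: Hungarian matching on the pairwise
# track-similarity matrix, keeping only pairs above the similarity threshold.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(tracks_cam1, tracks_cam2, sim_fn, threshold=0.6):
    sim = np.array([[sim_fn(a, b) for b in tracks_cam2] for a in tracks_cam1])
    rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
    pairs, unmatched = [], []
    for r, c in zip(rows, cols):
        if sim[r, c] > threshold:
            pairs.append((tracks_cam1[r], tracks_cam2[c]))  # same object
        else:
            unmatched.append(tracks_cam1[r])  # candidate for track prediction
    return pairs, unmatched
```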
Step 203, determining that a first track and a second track in the plurality of local tracks are tracks of a target object when the track similarity of the first track and the second track meets a preset similarity, and updating a global track of the target object based on the first track and the second track.
Wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: and calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames.
Illustratively, the multi-target tracker is updated after multi-target tracking is completed. The target tracker is updated in two different modes according to the stage at which the target match succeeds. When the target is matched successfully in the first, threshold-based matching, the target tracker is updated with the position of the current successful match. When the target is matched successfully in the second, prediction-based matching, the track of the tracked target is updated through the predicted track sequence according to the predicted position of the successfully matched target.
Illustratively, after the local track matching is successful, the target tracking result is reversely mapped back to the corresponding camera coordinate system, and display output is performed to obtain the global track of the target object.
It should be noted that, the first track and the second track are local tracks that any two of the plurality of local tracks are successfully matched, and after any two local tracks are successfully matched, the two local tracks can be determined to be tracks of the same object.
Specifically, the step 203 may further include the following step 203a:
And step 203a, updating the target tracker corresponding to the target object based on the track information of the first track or the track information of the second track.
The target tracker is used for tracking the motion trail of the target object.
Optionally, in the embodiment of the present application, if the video frames acquired by the first camera and the second camera at the same time are asynchronous, the matching success rate may be reduced, based on which the local track of the object may be extended by further estimating a plurality of position points of the object at the future time, so as to increase the matching success rate.
Illustratively, following the step 202b, the step 202 may further include the following steps 202d and 202e:
Step 202d, generating a predicted track corresponding to the first target local track based on the historical track of the first target local track when any one of the plurality of local tracks is not matched with the local track with the track similarity greater than the preset track similarity threshold.
Step 202e, calculating the track similarity of the predicted track and other local tracks except the first target local track in the plurality of local tracks based on the coordinate values corresponding to the predicted track under the target coordinate system.
For example, when a certain local track fails to match after being compared with all the other local tracks, the local track can be predicted, that is, extended, so as to eliminate the influence of unsynchronized camera video frames and improve the probability of a successful match.
In one possible implementation, to increase the matching success rate, multiple predicted trajectories may be generated and each local trajectory may be matched separately.
Specifically, the step 202d may further include the following step 202d1:
Step 202d1, predicting a track of the target object to be identified corresponding to the first target local track in a future period of time based on the historical track of the first target local track, so as to obtain a plurality of predicted tracks.
Illustratively, the track similarity calculation is not required between the plurality of predicted tracks.
Further, the step 202d1 may further include the following steps 202d11 and 202d12:
Step 202d11, predicting the position of the target to be identified at the future time based on the historical track of the first target local track, so as to obtain a plurality of predicted position points.
Step 202d12, generating the plurality of predicted trajectories based on the position information corresponding to the plurality of predicted position points and the first target local trajectory.
For example, in order to generate the plurality of predicted trajectories, the position points of the object to be identified corresponding to the first target local trajectory at the future time may be predicted first, so as to obtain a plurality of predicted position points. The plurality of predicted position points may be a plurality of predicted position points at the same time in the future, or a plurality of predicted position points at different times in the future, and each time may also correspond to a plurality of different predicted position points.
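A sketch of steps 202d11 and 202d12 under an assumed constant-velocity motion model: future position points are extrapolated from the historical track, and each prefix of them extends the unmatched local track into one predicted track. The TrackPoint record from the earlier sketch is reused, and the horizon parameters are assumptions.

```python
# Sketch of predicted-track generation for an unmatched local track
# (steps 202d11/202d12), assuming a constant-velocity motion model.
def predict_tracks(track, num_steps=5, dt=1.0):
    """Extrapolate future position points and append each prefix of them to
    the historical track, yielding several predicted tracks of growing length."""
    p1, p0 = track.history[-1], track.history[-2]
    vx, vy = (p1.x - p0.x) / dt, (p1.y - p0.y) / dt
    predicted_points = [
        TrackPoint(t=p1.t + k * dt, x=p1.x + k * vx * dt, y=p1.y + k * vy * dt,
                   w=p1.w, h=p1.h)
        for k in range(1, num_steps + 1)
    ]
    # one predicted track per horizon: history + first k predicted points
    return [track.history + predicted_points[:k] for k in range(1, num_steps + 1)]
```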
Specifically, based on the step 202d1, the step 202e may further include the following steps 202e1 and 202e2:
Step 202e1, calculating the track similarity corresponding to each predicted track in the plurality of predicted tracks and other local tracks except the first target local track in the plurality of local tracks, so as to obtain a plurality of candidate track similarities.
Step 202e2, screening out target candidate track similarity with highest track similarity from the plurality of candidate track similarities, and determining a local track corresponding to the target candidate track similarity and the first target local track as the track of the same object to be identified.
For example, after screening out the target candidate track similarity with the highest track similarity, the local track among the two tracks that produced that similarity (one predicted track and one local track) can be determined, together with the first target local track, as tracks of the same object to be identified.
For example, when the track similarity calculation is performed between the plurality of predicted tracks and the other local tracks, the predicted track with the largest track similarity may be selected by the following formula:

$$A^{*}=\underset{i\in\{1,\dots,m\}}{\arg\max}\;\mathrm{Sim}\left(a_{i},B\right)\qquad\text{(Equation 5)}$$

wherein $a_i$ is the i-th predicted track among the plurality of predicted tracks, $m$ is the number of predicted tracks, and $B$ is any other local track among the plurality of local tracks except the first target local track.
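Read this way, Equation 5 is an arg-max over the m predicted tracks; a direct sketch (function and parameter names are assumed, and `track_similarity` stands for the similarity measure defined elsewhere in this document):

```python
def select_best_prediction(predicted_tracks, other_local_track, track_similarity):
    """Sketch of Equation 5: among predicted tracks a_1..a_m, keep the one
    whose similarity to the candidate local track B is largest."""
    return max(predicted_tracks,
               key=lambda a_i: track_similarity(a_i, other_local_track))
```

Consistent with the remark after step 202d1, no similarity is computed between the predicted tracks themselves; each is scored only against the other local tracks.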
For example, as shown in fig. 4, because the video acquired by the two cameras is occasionally not time-synchronized, a target cannot always be matched successfully in the overlapping area using the target shape similarity and area intersection ratio criteria alone. Therefore, in a further stage of target matching, as shown in the lower right corner of fig. 4, for a target in camera No. 2, the possible position point of the target in the next frame is predicted from its historical motion track (the A1 area), and the historical position point of the target in the previous frame is likewise taken from the historical motion track. The degree of matching between the target in camera No. 1 and the candidate target set in camera No. 2 is then determined by calculating the similarity of the two. As the diagram shows, the target has high similarity with the A1 target of camera No. 2, so the two can be matched.
In one possible implementation, in order to make the matching result of the predicted track more accurate, the calculation of the track similarity may also be performed on the predicted track in the following manner.
Specifically, in the step 202e, the step of calculating the track similarity between the predicted track and the local tracks other than the first target local track in the plurality of local tracks may further include the following steps 202e1 and 202e2:
Step 202e1, intercepting a first sub-track corresponding to a future time after the current time and a second sub-track corresponding to a history time before the current time in the predicted track, and fusing the first sub-track and the second sub-track to generate a target sub-track.
The track length of the second sub-track is smaller than the track length of the first target local track.
Step 202e2, calculating the track similarity between the target sub-track and other local tracks except the first target local track in the plurality of local tracks.
Specifically, the step 202e2 may further include the following steps 202e21 to 202e23:
Step 202e21, intercepting a third sub-track corresponding to a history time before the current time in the second target local track.
Wherein the track length of the third sub-track is smaller than that of the target sub-track; the second target local track is any local track other than the first target local track among the plurality of local tracks.
Step 202e22, starting from one end of the target sub-track, sequentially intercepting fourth sub-tracks with the same track length as the third sub-track to obtain a plurality of fourth sub-tracks, and calculating track similarity between the third sub-track and each fourth sub-track in the plurality of fourth sub-tracks to obtain a plurality of track similarity.
Step 202e23, using the track similarity with the highest similarity value of the track similarities as the track similarity between the target sub-track and the second target local track.
It will be appreciated that, because it is unclear whether the matching failure is due to the video frames corresponding to the first target local track leading or lagging in time, the target frame information of the first target local track at M future time points after the current frame can be predicted from the current frame and combined with the target frame information at the M time points already tracked up to the current frame, forming target sub-track information of the first target local track with a track length of 2M+1. Matching against the other local tracks is then performed based on this target sub-track information.
The specific matching process is as follows: take a third sub-track of length N (N < 2M+1) from the second target local track, and calculate the similarity between the third sub-track and the target sub-track. Specifically, starting from one end of the target sub-track and moving one position at a time, sequentially calculate the track similarity between the third sub-track and the fourth sub-track of corresponding length in the target sub-track, obtaining a plurality of track similarities; then select the maximum of these track similarities as the track similarity between the predicted track corresponding to the target sub-track and the second target local track.
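A sketch of this sliding-window matching over the 2M+1-point target sub-track; the function names are assumed, and `track_similarity` again stands for the similarity measure used throughout this document.

```python
def sliding_window_similarity(target_sub_track, third_sub_track, track_similarity):
    """Hedged sketch of the hierarchical matching above: slide a window of
    length N = len(third_sub_track) across the target sub-track (length 2M+1,
    with N < 2M+1), score each fourth sub-track, and keep the maximum score."""
    n = len(third_sub_track)
    scores = [track_similarity(target_sub_track[i:i + n], third_sub_track)
              for i in range(len(target_sub_track) - n + 1)]
    return max(scores)  # best window position gives the final track similarity
```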
Optionally, in the embodiment of the present application, a topological relation between the cameras can be constructed by calibrating the overlapping areas in advance, and in the target tracking stage this topological relation can be used to determine whether a field-angle overlapping area exists between two cameras.
Illustratively, before the step 201, the multi-camera multi-target tracking method provided by the embodiment of the present application may further include the following steps 204 and 205:
Step 204, performing feature point matching on the images acquired by any two cameras, and calculating an image transformation matrix between the images acquired by the two cameras according to the matching result.
Step 205, performing image fusion on the images of any two cameras according to the image transformation matrix, determining whether an overlapping area exists in the angle of view between any two cameras, and generating a calibration result.
The calibration result is used for indicating whether an overlapping area of the field angle exists between any two cameras.
For example, because different cameras shoot from different angles, the views of the two cameras need to be unified: the images acquired by the two cameras are aligned through the image transformation matrix, yielding images under a unified viewpoint. Whether an overlapping area exists between the two images is then determined.
For example, as shown in fig. 5 (a), feature points between the two images are first found and feature point matching is performed; then, as shown in fig. 5 (b), the transformation matrix between the two images is calculated from the feature point matching result, and image fusion is performed according to the transformation matrix to determine the overlapping region between the two images. A pseudo coordinate system, that is, a unified global coordinate system, is then established according to the overlap calculation result; as shown in fig. 5 (c), the unified global coordinate system is established with the left image as reference.
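The calibration of fig. 5 can be sketched with OpenCV; the library, the ORB detector, and the RANSAC homography are assumed tooling choices, since the patent only requires feature point matching plus a transformation matrix.

```python
import cv2
import numpy as np

def estimate_transform(img1, img2, min_matches=10):
    """Hedged sketch of the fig. 5 calibration step, assuming ORB features
    and a RANSAC-estimated homography between the two camera images."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None  # too few matches: treat as no usable overlap
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # image transformation matrix between the two camera images
```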
For example, after the global coordinate system corresponding to each overlapping region is established, the coordinates of the respective position points on the local trajectories acquired by the two cameras may be mapped to the coordinates of the global coordinate system.
For example, as shown in fig. 6, a global coordinate system is used to represent the relative positional relationship between the multi-camera images; from it, the overlapping regions between different cameras can be determined and the topological relationship between the cameras established. The lower part of fig. 6 represents the relative relationship between the two cameras' images, where the red region is the overlapping region of the two camera images. Assuming the coordinate system is established with the upper left corner of the left image as origin, in normalized coordinates the upper-left and lower-right points of the left image are (x1=0, y1=0) and (x1'=0.5, y1'=1.0), and the upper-left and lower-right points of the right image are (x2=0.4, y2=-0.1) and (x2'=0.9, y2'=0.9). Finally, a topology map (i.e., the above calibration result) is established as shown in fig. 6 (3), indicating whether an overlapping area exists between cameras.
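With the normalized coordinates of the fig. 6 example, the overlapping region reduces to a rectangle intersection; the (x0, y0, x1, y1) box layout below is an assumed convention.

```python
def overlap_region(box1, box2):
    """Intersect two camera views given as (x0, y0, x1, y1) boxes in the
    unified global coordinate system; None means no overlap, which is what
    the topology map records for that camera pair."""
    x0, y0 = max(box1[0], box2[0]), max(box1[1], box2[1])
    x1, y1 = min(box1[2], box2[2]), min(box1[3], box2[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

# With the coordinates from fig. 6:
print(overlap_region((0.0, 0.0, 0.5, 1.0), (0.4, -0.1, 0.9, 0.9)))
# (0.4, 0.0, 0.5, 0.9) -- the shared field-angle region of the two cameras
```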
Specifically, based on the calibration result in the step 205, the step 201 may further include the following step 201a:
Step 201a, judging whether an overlapping area of a view angle exists between the first camera and the second camera based on the calibration result, and acquiring a plurality of local tracks, in a target area, of at least one object to be identified, acquired by the first camera and the second camera under the condition that the overlapping area of the view angle exists between the first camera and the second camera.
In an exemplary embodiment, before the local tracks in the overlapping area of the first camera and the second camera are acquired, it must first be determined whether an overlapping area exists between the first camera and the second camera; the relative position of the overlapping area within each camera's field angle is then determined from the coordinate system corresponding to the overlapping area, and finally it is determined which tracks fall within the overlapping area.
Further, the target coordinate system may also be used to determine an overlapping region in the field angles of the first camera and the second camera.
Specifically, the step 201a may further include the following steps 201a1 to 201a3:
Step 201a1, determining, based on the target coordinate system, area coordinate information of the target area, and converting the area coordinate information into first coordinate information of a coordinate system corresponding to the first camera and second coordinate information of a coordinate system corresponding to the second camera, when it is determined that there is a field angle overlapping area between the first camera and the second camera.
Step 201a2, determining an overlap region in the field of view of the first camera based on the first coordinate information, and determining an overlap region in the field of view of the second camera based on the second coordinate information.
Step 201a3, acquiring a local track acquired by the first camera based on the overlapping area in the field angle of the first camera, and acquiring a local track acquired by the second camera based on the overlapping area in the field angle of the second camera.
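The conversion in steps 201a1 and 201a2 can be sketched as a perspective transform of the target area's corner points, assuming the calibration homography from the earlier step is available; the function name and the direction of the homography are assumptions.

```python
import cv2
import numpy as np

def region_in_camera(region_corners, H_global_to_camera):
    """Hedged sketch: map the target area's corner points from the global
    coordinate system into one camera's coordinate system using an assumed
    3x3 homography obtained during calibration."""
    pts = np.asarray(region_corners, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H_global_to_camera).reshape(-1, 2)
```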
In the multi-camera multi-target tracking method provided by the embodiment of the application: 1. in the local track matching process, local matching replaces global matching, reducing the amount of calculation and the errors introduced, and improving calculation efficiency; 2. the track-to-track matching mode is converted into a target-to-target matching mode, which has lower computational complexity; 3. in the target matching process, a hierarchical target matching method reasonably solves the problem of asynchronous video frames during multi-camera acquisition.
The multi-camera multi-target tracking method provided by the embodiment of the application comprises the steps of firstly, acquiring a plurality of local tracks of at least one object to be identified, which are acquired by a first camera and a second camera, in a target area; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera; then, converting coordinates of the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks; finally, under the condition that the track similarity of a first track and a second track in the plurality of local tracks meets the preset similarity, determining the first track and the second track as tracks of target objects, and updating the global track of the target objects based on the first track and the second track; wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: and calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames. Thus, the success rate of cross-camera track matching in the multi-camera multi-target tracking method can be improved.
It should be noted that, in the multi-camera multi-target tracking method provided by the embodiment of the present application, the execution subject may be a multi-camera multi-target tracking device, or a control module for executing the multi-camera multi-target tracking method in the multi-camera multi-target tracking device. In the embodiment of the application, the multi-target tracking device with multiple cameras is taken as an example to execute the multi-target tracking method with multiple cameras.
In the embodiments of the present application, the multi-camera multi-target tracking method is described with reference to the drawings. In a specific implementation, the multi-camera multi-target tracking method shown in the above method drawings may also be implemented in combination with any other combinable drawing illustrated in the above embodiments, which is not repeated here.
The multi-camera multi-target tracking device provided by the application is described below; the multi-camera multi-target tracking device described below and the multi-camera multi-target tracking method described above may be referred to in correspondence with each other.
Fig. 7 is a schematic structural diagram of a multi-camera multi-target tracking device according to an embodiment of the present application, as shown in fig. 7, specifically including:
An acquiring module 701, configured to acquire a plurality of local trajectories of at least one object to be identified acquired by the first camera and the second camera in the target area; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera; the track matching module 702 is configured to perform coordinate transformation on the plurality of local tracks based on a target coordinate system corresponding to the target region, obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculate a track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks; a track updating module 703, configured to determine, when track similarities of a first track and a second track in the plurality of local tracks satisfy preset similarities, that the first track and the second track are tracks of a target object, and update a global track of the target object based on the first track and the second track; wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: and calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames.
Optionally, the track matching module 702 is specifically configured to calculate a shape similarity of a target frame and an area intersection ratio of the target frame of any two local tracks in the plurality of local tracks based on coordinate values corresponding to each local track; the local trajectory includes: a plurality of historical tracking locations; the target frame comprises: a plurality of circumscribed rectangular frames corresponding to each of the plurality of history tracking positions; a history tracking position corresponds to an external rectangular frame; the track matching module 702 is specifically further configured to calculate a corresponding track similarity based on a shape similarity of the target frame of any two local tracks and an area intersection ratio of the target frame; the track matching module 702 is specifically further configured to determine two local tracks with track similarity greater than a preset track similarity threshold and the track similarity being the largest as the tracks of the same object to be identified.
Optionally, the track matching module 702 is specifically configured to calculate a sum of similarities of sub-tracks corresponding to the historic tracking positions of any two local tracks based on a shape similarity of the target frame and an area intersection ratio of the target frame of any two local tracks, so as to obtain a track similarity of any two local tracks.
Optionally, the track matching module 702 is specifically configured to perform track alignment on the plurality of local tracks in the target coordinate system based on coordinate values of a plurality of historical tracking positions corresponding to each local track; the acquiring module 701 is further configured to acquire widths and heights of target frames of a first track to be matched and a second track to be matched in the plurality of local tracks, and calculate a first absolute value and a second absolute value; the first absolute value is the absolute value of the width of the target frame of the first track to be matched and the second track to be matched; the second absolute value is a high absolute value of a target frame of the first track to be matched and the second track to be matched; the track matching module 702 is specifically configured to calculate a first ratio of the first absolute value to a width of a target track to be matched and a second ratio of the second absolute value to a height of the target track to be matched; the track matching module 702 is specifically configured to calculate a first difference value between the target integer and the first ratio, and a second difference value between the first difference value and the second ratio, and determine the second difference value as a shape similarity between the first track to be matched and the second track to be matched; the target track to be matched is any one of the first track to be matched and the second track to be matched.
Optionally, the shape similarity of any two local trajectories is calculated based on the following formula:
$$S_{\text{shape}}(A,B)=1-\frac{\lvert W_{A}-W_{B}\rvert}{W}-\frac{\lvert H_{A}-H_{B}\rvert}{H}$$

wherein $S_{\text{shape}}(A,B)$ is the shape similarity of the local track A and the local track B, $W$ represents the width of the circumscribed rectangular frame, $H$ represents the height of the circumscribed rectangular frame, and A and B represent the two local tracks.
Optionally, the track similarity of any two local tracks is calculated based on the following formula:
$$\mathrm{Sim}(A,B)=\alpha\cdot\mathrm{IoU}(A,B)+\beta\cdot S_{\text{shape}}(A,B),\qquad \alpha+\beta=1$$

wherein $\mathrm{Sim}(A,B)$ is the track similarity of the local track A and the local track B, $\alpha$ and $\beta$ are weight coefficients whose sum is 1, $\mathrm{IoU}(A,B)$ is the area intersection ratio of the local track A and the local track B, and $S_{\text{shape}}(A,B)$ is the shape similarity of the local track A and the local track B.
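A compact sketch combining the two formulas, summing per-position scores over aligned historical tracking positions as described for the track matching module; the 0.5/0.5 weights and the choice of reference frame are assumptions.

```python
def shape_similarity(box_a, box_b):
    """Shape similarity of two circumscribed rectangles (x, y, w, h); the
    first box's width and height serve as the reference, an assumed choice
    (the text allows either track's target frame)."""
    _, _, wa, ha = box_a
    _, _, wb, hb = box_b
    return 1.0 - abs(wa - wb) / wa - abs(ha - hb) / ha

def area_iou(box_a, box_b):
    """Area intersection ratio of two (x, y, w, h) rectangles."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    iw = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))
    ih = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = iw * ih
    return inter / (wa * ha + wb * hb - inter)

def track_similarity(track_a, track_b, alpha=0.5, beta=0.5):
    """Weighted similarity summed over aligned historical tracking positions;
    alpha + beta = 1 (the 0.5/0.5 split is an assumption, not the patent's)."""
    return sum(alpha * area_iou(a, b) + beta * shape_similarity(a, b)
               for a, b in zip(track_a, track_b))
```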
Optionally, the track updating module 703 is specifically configured to update the target tracker corresponding to the target object based on the track information of the first track or the track information of the second track; the target tracker is used for tracking the motion trail of the target object.
Optionally, the track matching module 702 is further configured to, if any first target local track among the plurality of local tracks does not match a local track with a track similarity greater than the preset track similarity threshold, generate a predicted track corresponding to the first target local track based on the historical track of the first target local track; the track matching module 702 is further configured to calculate, based on the coordinate values corresponding to the predicted track in the target coordinate system, the track similarity between the predicted track and the other local tracks except the first target local track among the plurality of local tracks.
Optionally, the track matching module 702 is specifically configured to predict, based on the historical track of the first target local track, a track of the target object to be identified corresponding to the first target local track in a future period of time, so as to obtain a plurality of predicted tracks.
Optionally, the track matching module 702 is specifically configured to predict, based on the historical track of the first target local track, a position of the target to be identified at a future time to obtain a plurality of predicted position points; the track matching module 702 is specifically further configured to generate the plurality of predicted tracks based on the position information corresponding to the plurality of predicted position points and the first target local track.
Optionally, the track matching module 702 is specifically configured to calculate track similarities corresponding to each of the plurality of predicted tracks and other local tracks except the first target local track in the plurality of local tracks, so as to obtain a plurality of candidate track similarities; the track matching module 702 is specifically further configured to screen out a target candidate track similarity with a highest track similarity from the plurality of candidate track similarities, and determine a local track corresponding to the target candidate track similarity and the first target local track as a track of the same object to be identified.
Optionally, the track matching module 702 is specifically configured to intercept a first sub-track corresponding to a future time after the current time and a second sub-track corresponding to a history time before the current time in the predicted track, and fuse the first sub-track with the second sub-track to generate a target sub-track; the track length of the second sub-track is smaller than the track length of the first target local track; the track matching module 702 is specifically further configured to calculate track similarity between the target sub-track and other local tracks in the plurality of local tracks except the first target local track.
Optionally, the track matching module 702 is specifically configured to intercept a third sub-track corresponding to a history time before the current time in the second target local track; the track length of the third sub-track is smaller than that of the target sub-track; the second target local track is any local track except the first target local track in the plurality of local tracks; the track matching module 702 is specifically further configured to sequentially intercept a fourth sub-track with the same track length as the third sub-track from one end of the target sub-track, obtain a plurality of fourth sub-tracks, and calculate a track similarity between the third sub-track and each of the fourth sub-tracks to obtain a plurality of track similarities; the track matching module 702 is specifically further configured to use the track similarity with the highest similarity value of the plurality of track similarities as the track similarity between the predicted track and the second target local track.
Optionally, the track updating module 703 is specifically configured to update the target tracker corresponding to the target object based on track information of the target candidate track; the target tracker is used for tracking the motion trail of the target object.
Optionally, the apparatus further comprises: a calibration module; the calibration module is used for carrying out characteristic point matching on the images acquired by any two cameras in the plurality of cameras and calculating an image transformation matrix between the images acquired by any two cameras according to a matching result; the calibration module is further used for carrying out image fusion on the images of any two cameras according to the image transformation matrix, determining whether an overlapping area exists in the field angle between any two cameras, and generating a calibration result; the calibration result is used for indicating whether an overlapping area of the field angle exists between any two cameras.
Optionally, the acquiring module 701 is specifically configured to determine whether an overlapping area of a field angle exists between the first camera and the second camera based on the calibration result, and acquire a plurality of local trajectories of at least one object to be identified acquired by the first camera and the second camera in the target area if it is determined that the overlapping area of the field angle exists between the first camera and the second camera.
Optionally, the apparatus further comprises: a determining module; the determining module is used for determining area coordinate information of the target area based on the target coordinate system and converting the area coordinate information into first coordinate information of a coordinate system corresponding to the first camera and second coordinate information of a coordinate system corresponding to the second camera under the condition that an overlapping area of a view angle exists between the first camera and the second camera; the determining module is further configured to determine an overlapping region in a field angle of view of the first camera based on the first coordinate information, and determine an overlapping region in a field angle of view of the second camera based on the second coordinate information; the acquiring module 701 is specifically further configured to acquire a local track acquired by the first camera based on the overlapping area in the field angle of view of the first camera, and acquire a local track acquired by the second camera based on the overlapping area in the field angle of view of the second camera.
The multi-camera multi-target tracking device provided by the application comprises the steps that firstly, a plurality of local tracks of at least one object to be identified, which is acquired by a first camera and a second camera, in a target area are acquired; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera; then, converting coordinates of the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks; finally, under the condition that the track similarity of a first track and a second track in the plurality of local tracks meets the preset similarity, determining the first track and the second track as tracks of target objects, and updating the global track of the target objects based on the first track and the second track; wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: and calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames. Thus, the success rate of cross-camera track matching in the multi-camera multi-target tracking method can be improved.
Fig. 8 illustrates a physical structure diagram of an electronic device, as shown in fig. 8, which may include: processor 810, communication interface (Communications Interface) 820, memory 830, and communication bus 840, wherein processor 810, communication interface 820, memory 830 accomplish communication with each other through communication bus 840. The processor 810 may invoke logic instructions in the memory 830 to perform a multi-camera multi-target tracking method comprising: acquiring a plurality of local tracks of at least one object to be identified, which are acquired by a first camera and a second camera, in a target area; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera; performing coordinate conversion on the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks; under the condition that the track similarity of a first track and a second track in the plurality of local tracks meets the preset similarity, determining the first track and the second track as tracks of target objects, and updating global tracks of the target objects based on the first track and the second track; wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: and calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames.
Further, the logic instructions in the memory 830 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may essentially be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In another aspect, the present application also provides a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the multi-objective tracking method of a multi-camera provided by the above methods, the method comprising: acquiring a plurality of local tracks of at least one object to be identified, which are acquired by a first camera and a second camera, in a target area; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera; performing coordinate conversion on the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks; under the condition that the track similarity of a first track and a second track in the plurality of local tracks meets the preset similarity, determining the first track and the second track as tracks of target objects, and updating global tracks of the target objects based on the first track and the second track; wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: and calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames.
In yet another aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor is implemented to perform the multi-object tracking method of the multi-camera provided above, the method comprising: acquiring a plurality of local tracks of at least one object to be identified, which are acquired by a first camera and a second camera, in a target area; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera; performing coordinate conversion on the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks; under the condition that the track similarity of a first track and a second track in the plurality of local tracks meets the preset similarity, determining the first track and the second track as tracks of target objects, and updating global tracks of the target objects based on the first track and the second track; wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: and calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames.
The apparatus embodiments described above are merely illustrative; units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (19)
1. A multi-camera multi-target tracking method, characterized by being applied to a multi-camera multi-target tracking system, the system comprising: a plurality of cameras; the method comprises the following steps:
Acquiring a plurality of local tracks of at least one object to be identified, which are acquired by a first camera and a second camera, in a target area; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera;
Performing coordinate conversion on the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks;
under the condition that the track similarity of a first track and a second track in the plurality of local tracks meets the preset similarity, determining the first track and the second track as tracks of target objects, and updating global tracks of the target objects based on the first track and the second track;
Wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames;
The shape similarity of any two local tracks is calculated based on the difference value of the widths of the circumscribed rectangular frames corresponding to the two local tracks and the difference value of the heights of the circumscribed rectangular frames corresponding to the two local tracks;
The track similarity of any two local tracks is calculated based on the following formula:
$$\mathrm{Sim}(A,B)=\alpha\cdot\mathrm{IoU}(A,B)+\beta\cdot S_{\text{shape}}(A,B),\qquad \alpha+\beta=1$$

wherein $\mathrm{Sim}(A,B)$ is the track similarity of the local track A and the local track B, $\alpha$ and $\beta$ are weight coefficients whose sum is 1, $\mathrm{IoU}(A,B)$ is the area intersection ratio of the local track A and the local track B, and $S_{\text{shape}}(A,B)$ is the shape similarity of the local track A and the local track B.
2. The method according to claim 1, wherein calculating the track similarity between any two local tracks based on the coordinate value corresponding to each local track in the plurality of local tracks includes:
Calculating the shape similarity of the target frame and the area intersection ratio of the target frame of any two local tracks in the plurality of local tracks based on the coordinate value corresponding to each local track; the local trajectory includes: a plurality of historical tracking locations; the target frame comprises: a plurality of circumscribed rectangular frames corresponding to each of the plurality of history tracking positions; a history tracking position corresponds to an external rectangular frame;
Calculating the corresponding track similarity based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames;
and determining two local tracks with track similarity larger than a preset track similarity threshold and maximum track similarity as tracks of the same object to be identified.
3. The method according to claim 2, wherein calculating the corresponding track similarity based on the shape similarity of the target frame and the area intersection ratio of the target frame of any two partial tracks includes:
And calculating the sum of the similarity of sub-tracks corresponding to any two local tracks at each historical tracking position based on the shape similarity of the target frame and the area intersection ratio of the target frame of any two local tracks, so as to obtain the track similarity of any two local tracks.
4. The method according to claim 2, wherein calculating the shape similarity of the target frame of any two of the plurality of local trajectories based on the coordinate values corresponding to each local trajectory comprises:
track alignment is carried out on the plurality of local tracks in the target coordinate system based on coordinate values of the plurality of historical tracking positions corresponding to each local track;
Acquiring the width and the height of a target frame of a first track to be matched and a second track to be matched in the plurality of local tracks, and calculating a first absolute value and a second absolute value;
The first absolute value is the absolute value of the width of the target frame of the first track to be matched and the second track to be matched; the second absolute value is a high absolute value of a target frame of the first track to be matched and the second track to be matched;
calculating a first ratio of the first absolute value to the width of the target track to be matched and a second ratio of the second absolute value to the height of the target track to be matched;
Calculating a first difference value of a target integer and the first ratio and a second difference value of the first difference value and the second ratio, and determining the second difference value as the shape similarity of the first track to be matched and the second track to be matched;
the target track to be matched is any one of the first track to be matched and the second track to be matched.
5. The method according to any one of claims 1 to 4, wherein the shape similarity of any two local trajectories is calculated based on the following formula:
$$S_{\text{shape}}(A,B)=1-\frac{\lvert W_{A}-W_{B}\rvert}{W}-\frac{\lvert H_{A}-H_{B}\rvert}{H}$$

wherein $S_{\text{shape}}(A,B)$ is the shape similarity of the local track A and the local track B, $W$ represents the width of the circumscribed rectangular frame, $H$ represents the height of the circumscribed rectangular frame, and A and B represent the two local tracks.
6. The method of claim 1, wherein updating the global track of the target object based on the first track and the second track comprises:
Updating a target tracker corresponding to the target object based on the track information of the first track or the track information of the second track;
The target tracker is used for tracking the motion trail of the target object.
7. The method according to claim 2, wherein after calculating the corresponding track similarity based on the shape similarity of any two local tracks and the area intersection ratio of the target frame, the method further comprises:
Generating a predicted track corresponding to a first target local track based on a historical track of the first target local track under the condition that any first target local track in the plurality of local tracks is not matched with a local track with track similarity larger than the preset track similarity threshold;
and calculating the track similarity of the predicted track and other local tracks except the first target local track in the plurality of local tracks based on the coordinate values corresponding to the predicted track under the target coordinate system.
8. The method of claim 7, wherein the generating a predicted track corresponding to the first target local track based on the historical track of the first target local track comprises:
and predicting the track of the target object to be identified corresponding to the first target local track within a future period of time based on the historical track of the first target local track to obtain a plurality of predicted tracks.
9. The method of claim 8, wherein predicting the track of the target object to be identified corresponding to the first target local track over a future period of time based on the historical track of the first target local track to obtain a plurality of predicted tracks, comprises:
Predicting the position of the target to be identified at the future moment based on the historical track of the first target local track to obtain a plurality of predicted position points;
And generating a plurality of predicted tracks based on the position information corresponding to the plurality of predicted position points and the first target local track.
10. The method according to claim 8 or 9, wherein the calculating the track similarity of the predicted track to other local tracks than the first target local track among the plurality of local tracks includes:
Calculating the track similarity corresponding to each predicted track in the plurality of predicted tracks and other local tracks except the first target local track in the plurality of local tracks to obtain a plurality of candidate track similarity;
And screening target candidate track similarity with highest track similarity from the plurality of candidate track similarity, and determining a local track corresponding to the target candidate track similarity and the first target local track as the track of the same object to be identified.
11. The method according to claim 8 or 9, wherein the calculating the track similarity of the predicted track to other local tracks than the first target local track among the plurality of local tracks includes:
Intercepting a first sub-track corresponding to a future time after the current time and a second sub-track corresponding to a history time before the current time in the predicted track, and fusing the first sub-track and the second sub-track to generate a target sub-track; the track length of the second sub-track is smaller than the track length of the first target local track;
and calculating the track similarity of the target sub-track and other local tracks except the first target local track in the plurality of local tracks.
12. The method of claim 11, wherein the calculating the track similarity of the target sub-track to other local tracks of the plurality of local tracks than the first target local track comprises:
Intercepting a third sub-track corresponding to a historical moment before the current moment in the second target local track; the track length of the third sub-track is smaller than that of the target sub-track; the second target local track is any local track except the first target local track in the plurality of local tracks;
Sequentially intercepting fourth sub-tracks with the same track length as the third sub-track from one end of the target sub-track to obtain a plurality of fourth sub-tracks, and calculating track similarity of the third sub-track and each fourth sub-track in the plurality of fourth sub-tracks to obtain a plurality of track similarity;
And taking the track similarity with the highest similarity value in the track similarities as the track similarity between the predicted track and the second target local track.
13. The method of claim 10, wherein updating the global track of the target object based on the first track and the second track comprises:
updating a target tracker corresponding to the target object based on the track information of the target candidate track;
The target tracker is used for tracking the motion trail of the target object.
14. The method of claim 1, wherein prior to acquiring the plurality of local trajectories of the at least one object to be identified acquired by the first camera and the second camera within the target area, the method further comprises:
Performing feature point matching on images acquired by any two cameras, and calculating an image transformation matrix between the images acquired by any two cameras according to a matching result;
image fusion is carried out on the images of any two cameras according to the image transformation matrix, whether an overlapping area exists in the angle of view between any two cameras is determined, and a calibration result is generated;
the calibration result is used for indicating whether an overlapping area of the field angle exists between any two cameras.
15. The method of claim 14, wherein the acquiring a plurality of local trajectories of the at least one object to be identified acquired by the first camera and the second camera within the target area comprises:
And judging whether an angle of view overlapping area exists between the first camera and the second camera based on the calibration result, and acquiring a plurality of local tracks of at least one object to be identified, which are acquired by the first camera and the second camera, in a target area under the condition that the angle of view overlapping area exists between the first camera and the second camera.
16. The method according to claim 15, wherein the acquiring a plurality of local trajectories of at least one object to be identified acquired by the first camera and the second camera in the target area in the case where it is determined that there is a field angle overlapping area between the first camera and the second camera includes:
Determining region coordinate information of the target region based on the target coordinate system under the condition that a field angle overlapping region exists between the first camera and the second camera, and converting the region coordinate information into first coordinate information of a coordinate system corresponding to the first camera and second coordinate information of a coordinate system corresponding to the second camera;
Determining an overlap region in a field angle of the first camera based on the first coordinate information, and determining an overlap region in a field angle of the second camera based on the second coordinate information;
the local track acquired by the first camera is acquired based on the overlapping area in the field angle of the first camera, and the local track acquired by the second camera is acquired based on the overlapping area in the field angle of the second camera.
17. A multi-camera multi-target tracking apparatus for use in a multi-camera multi-target tracking system, the system comprising: a plurality of cameras; the device comprises:
The acquisition module is used for acquiring a plurality of local tracks of at least one object to be identified, which are acquired by the first camera and the second camera, in the target area; the first camera and the second camera are cameras with overlapping areas at any two angles of view of the plurality of cameras; the target area is an overlapping area of angles of view of the first camera and the second camera;
the matching module is used for carrying out coordinate conversion on the plurality of local tracks based on a target coordinate system corresponding to the target region to obtain coordinate values corresponding to each local track in the plurality of local tracks under the target coordinate system, and calculating track similarity between any two local tracks based on the coordinate values corresponding to each local track in the plurality of local tracks;
the updating module is used for determining a first track and a second track as tracks of a target object under the condition that track similarity of the first track and the second track in the plurality of local tracks meets preset similarity, and updating the global track of the target object based on the first track and the second track;
Wherein the target object is an object in the at least one object to be identified; the track similarity between any two local tracks is as follows: calculating based on the shape similarity of the target frames of any two local tracks and the area intersection ratio of the target frames;
The shape similarity of any two local tracks is calculated based on the difference value of the widths of the circumscribed rectangular frames corresponding to the two local tracks and the difference value of the heights of the circumscribed rectangular frames corresponding to the two local tracks;
The track similarity of any two local tracks is calculated based on the following formula:
$$\mathrm{Sim}(A,B)=\alpha\cdot\mathrm{IoU}(A,B)+\beta\cdot S_{\text{shape}}(A,B),\qquad \alpha+\beta=1$$

wherein $\mathrm{Sim}(A,B)$ is the track similarity of the local track A and the local track B, $\alpha$ and $\beta$ are weight coefficients whose sum is 1, $\mathrm{IoU}(A,B)$ is the area intersection ratio of the local track A and the local track B, and $S_{\text{shape}}(A,B)$ is the shape similarity of the local track A and the local track B.
18. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the multi-object tracking method of a multi-camera as claimed in any one of claims 1 to 16 when the program is executed.
19. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when being executed by a processor, implements the steps of the multi-object tracking method of a multi-camera according to any of claims 1 to 16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410132014.2A CN117670939B (en) | 2024-01-31 | 2024-01-31 | Multi-camera multi-target tracking method and device, storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410132014.2A CN117670939B (en) | 2024-01-31 | 2024-01-31 | Multi-camera multi-target tracking method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117670939A CN117670939A (en) | 2024-03-08 |
CN117670939B true CN117670939B (en) | 2024-04-19 |
Family
ID=90064531
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410132014.2A Active CN117670939B (en) | 2024-01-31 | 2024-01-31 | Multi-camera multi-target tracking method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117670939B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118505758B (en) * | 2024-07-22 | 2024-10-18 | 中船(浙江)海洋科技有限公司 | Ship positioning and track tracking method based on multi-camera array |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111709975A (en) * | 2020-06-22 | 2020-09-25 | 上海高德威智能交通系统有限公司 | Multi-target tracking method and device, electronic equipment and storage medium |
CN112669345A (en) * | 2020-12-30 | 2021-04-16 | 中山大学 | Cloud deployment-oriented multi-target track tracking method and system |
CN113190711A (en) * | 2021-03-26 | 2021-07-30 | 南京财经大学 | Video dynamic object trajectory space-time retrieval method and system in geographic scene |
CN114092720A (en) * | 2020-08-05 | 2022-02-25 | 北京万集科技股份有限公司 | Target tracking method and device, computer equipment and storage medium |
CN116452632A (en) * | 2023-03-28 | 2023-07-18 | 重庆长安汽车股份有限公司 | Cross-camera track determination method, device, equipment and storage medium |
CN116645396A (en) * | 2023-04-28 | 2023-08-25 | 苏州浪潮智能科技有限公司 | Track determination method, track determination device, computer-readable storage medium and electronic device |
WO2023197232A1 (en) * | 2022-04-14 | 2023-10-19 | 京东方科技集团股份有限公司 | Target tracking method and apparatus, electronic device, and computer readable medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117409032A (en) * | 2022-07-07 | 2024-01-16 | 富士通株式会社 | Method, device and storage medium for multi-target multi-camera tracking |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111709975A (en) * | 2020-06-22 | 2020-09-25 | 上海高德威智能交通系统有限公司 | Multi-target tracking method and device, electronic equipment and storage medium |
CN114092720A (en) * | 2020-08-05 | 2022-02-25 | 北京万集科技股份有限公司 | Target tracking method and device, computer equipment and storage medium |
CN112669345A (en) * | 2020-12-30 | 2021-04-16 | 中山大学 | Cloud deployment-oriented multi-target track tracking method and system |
CN113190711A (en) * | 2021-03-26 | 2021-07-30 | 南京财经大学 | Video dynamic object trajectory space-time retrieval method and system in geographic scene |
WO2023197232A1 (en) * | 2022-04-14 | 2023-10-19 | 京东方科技集团股份有限公司 | Target tracking method and apparatus, electronic device, and computer readable medium |
CN116452632A (en) * | 2023-03-28 | 2023-07-18 | 重庆长安汽车股份有限公司 | Cross-camera track determination method, device, equipment and storage medium |
CN116645396A (en) * | 2023-04-28 | 2023-08-25 | 苏州浪潮智能科技有限公司 | Track determination method, track determination device, computer-readable storage medium and electronic device |
Similar Documents
Publication | Title
---|---
CN111462200B (en) | Cross-video pedestrian positioning and tracking method, system and equipment
CN108960211B (en) | Multi-target human body posture detection method and system
LU102028B1 (en) | Multiple view multiple target tracking method and system based on distributed camera network
US10223595B2 | Methods, devices and computer programs for tracking targets using independent tracking modules associated with cameras
Milan et al. | Challenges of ground truth evaluation of multi-target tracking
CN111899282B (en) | Pedestrian track tracking method and device based on binocular camera calibration
CN107452015B (en) | Target tracking system with re-detection mechanism
CN117670939B (en) | Multi-camera multi-target tracking method and device, storage medium and electronic equipment
Boniardi et al. | Robot localization in floor plans using a room layout edge extraction network
CN111512317A (en) | Multi-target real-time tracking method and device and electronic equipment
CN104685513A (en) | Feature based high resolution motion estimation from low resolution images captured using an array source
CN111144213A (en) | Object detection method and related equipment
US20150104067A1 | Method and apparatus for tracking object, and method for selecting tracking feature
CN110827321B (en) | Multi-camera collaborative active target tracking method based on three-dimensional information
CN111160212A (en) | Improved tracking learning detection system and method based on YOLOv3-Tiny
US11544926B2 | Image processing apparatus, method of processing image, and storage medium
CN111354022B (en) | Target tracking method and system based on kernel correlation filtering
CN115063454B (en) | Multi-target tracking matching method, device, terminal and storage medium
CN112200157A (en) | Human body 3D posture recognition method and system for reducing image background interference
CN113608663A (en) | Fingertip tracking method based on deep learning and K-curvature method
CN113538523B (en) | Parking space detection tracking method, electronic equipment and vehicle
JP2019012497A (en) | Portion recognition method, device, program, and imaging control system
KR100994722B1 (en) | Method for tracking moving object on multiple cameras using probabilistic camera hand-off
CN117333538A (en) | Multi-view multi-person human body posture estimation method based on local optimization
CN106971381A (en) | Method for generating field-of-view boundary lines for wide-angle cameras with overlapping fields of view
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |