CN108470332A - Multi-object tracking method and device - Google Patents

Multi-object tracking method and device

Info

Publication number
CN108470332A
Authority
CN
China
Prior art keywords
frame
tracking
target
jth
detection block
Prior art date
Legal status
Granted
Application number
CN201810069852.4A
Other languages
Chinese (zh)
Other versions
CN108470332B (en)
Inventor
吴婷璇
陈杰
Current Assignee
Bo Yun Vision (beijing) Technology Co Ltd
Original Assignee
Bo Yun Vision (beijing) Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Bo Yun Vision (beijing) Technology Co Ltd
Priority to CN201810069852.4A
Publication of CN108470332A
Application granted
Publication of CN108470332B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 7/248: Analysis of motion using feature-based methods, e.g. tracking of corners or segments, involving reference images or patches
    • G06T 2207/10016: Image acquisition modality: video; image sequence
    • G06T 2207/20081: Special algorithmic details: training; learning
    • G06T 2207/20084: Special algorithmic details: artificial neural networks [ANN]
    • G06V 2201/07: Image or video recognition or understanding: target detection


Abstract

The present invention provides a multi-object tracking method and device. The method includes: obtaining candidate detection boxes of multiple tracked targets in frame j by target detection; associating the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1, to obtain the detection box of each tracked target in frame j; determining that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j; and reclassifying the detection boxes associated with the at least two tracked targets in frame j using a classification model, to obtain the detection boxes of the at least two tracked targets after reclassification in frame j. The present invention allows targets that overlap during tracking to be matched to their correct positions once the overlap ends, thereby ensuring the accuracy of multi-object tracking.

Description

Multi-object tracking method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a multi-object tracking method and a target tracking device.
Background
Target tracking includes single-object tracking and multi-object tracking. Single-object tracking can handle problems such as illumination changes, deformation, and occlusion by modeling the target's appearance or motion. Multi-object tracking is considerably more complex: in addition to the problems encountered in single-object tracking, it also requires association and matching between targets.
Multi-object tracking is a research hotspot in the field of computer vision. It refers to using a computer to determine, in a video sequence, the position, size, and complete motion trajectory of each independently moving target of interest that has salient visual features. It is widely applied in in-vehicle assistance systems, the military field, and intelligent security. Multi-object tracking generally follows one of the following two schemes:
1. First detect the targets, then describe the features of each detected target, and track each target according to those features.
2. For long-term tracking, or when the tracked target undergoes shape changes, many approaches replace tracking with detection and associate the detections obtained in successive frames to obtain the complete trajectory of each target.
Target overlap is frequently encountered in multi-object tracking tasks. After a tracked target overlaps with other targets, the tracking trajectories may be matched incorrectly.
Summary of the invention
In view of this, the present invention provides a multi-object tracking method and device, aiming to solve the problem of trajectory matching errors in multi-object tracking.
In a first aspect, an embodiment of the present invention provides a multi-object tracking method, including:
obtaining candidate detection boxes of multiple tracked targets in frame j by target detection;
associating the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1, to obtain the detection box of each tracked target in frame j;
determining that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j; and
reclassifying the detection boxes associated with the at least two tracked targets in frame j using a classification model, to obtain the detection boxes of the at least two tracked targets after reclassification in frame j, so that the reclassified detection boxes of the at least two tracked targets in frame j are associated with the ROIs of the at least two tracked targets in frame i-1, where i and j are positive integers and i<j.
In one embodiment, the multi-object tracking method of the first aspect further includes:
establishing a classification model for the multiple tracked targets; and
when it is determined that the at least two tracked targets overlap in their associated detection boxes in frame i, updating the classification model using the ROIs of the at least two tracked targets in the frames before the overlap occurred,
where reclassifying the detection boxes associated with the at least two tracked targets in frame j using the classification model includes:
reclassifying the detection boxes associated with the at least two tracked targets in frame j using the updated classification model.
In one embodiment of the method of the first aspect, determining that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j includes:
computing the intersection over union (IOU) between the detection boxes associated with the multiple tracked targets in frame j;
if the IOU between the detection boxes associated with the multiple tracked targets in frame j is greater than a specific threshold, determining that the detection boxes of at least two tracked targets overlap; and
if the IOU between the detection boxes associated with the multiple tracked targets in frame j is less than or equal to the specific threshold, determining that the detection boxes of the at least two tracked targets do not overlap.
In one embodiment of the method of the first aspect, associating the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1 includes:
computing the IOU between the ROI determined for each tracked target in frame j-1 and the candidate detection boxes of the multiple tracked targets in frame j, and taking, for each tracked target, the candidate detection box whose IOU is the largest and exceeds a specific threshold as that target's detection box in frame j.
In one embodiment, the multi-object tracking method of the first aspect further includes processing the candidate detection boxes of the multiple tracked targets:
after each of the multiple tracked targets has been associated with a candidate detection box of frame j, deleting the associated detection box from the candidate detection box queue;
for a tracked target that is not successfully associated, deleting it from the tracked-target queue if it fails to be associated for several consecutive frames; and
for a candidate detection box that is not successfully associated, adding it to the tracked-target queue as a new tracked target if it appears continuously in several consecutive frames.
In a second aspect, a multi-object tracking device is provided, the device including:
an acquisition module configured to obtain candidate detection boxes of multiple tracked targets in frame j by target detection;
an association module configured to associate the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1, to obtain the detection box of each tracked target in frame j;
a determination module configured to determine that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j; and
a classification module configured to reclassify the detection boxes associated with the at least two tracked targets in frame j using a classification model, to obtain the detection boxes of the at least two tracked targets after reclassification in frame j, so that the reclassified detection boxes of the at least two tracked targets in frame j are associated with the ROIs of the at least two tracked targets in frame i-1, where i and j are positive integers and i<j.
In one embodiment, the multi-object tracking device of the second aspect further includes:
an establishment module configured to establish a classification model for the multiple tracked targets; and
an update module configured to, when it is determined that the at least two tracked targets overlap in their associated detection boxes before frame i, update the classification model using the ROIs of the at least two tracked targets in the frames before the overlap occurred, where the classification module reclassifies the detection boxes associated with the at least two tracked targets in frame j using the updated classification model.
In one embodiment of the multi-object tracking device of the second aspect, the determination module is specifically configured to:
determine that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j;
compute the intersection over union (IOU) between the detection boxes associated with the multiple tracked targets in frame j; and
determine that the detection boxes of at least two tracked targets overlap if the IOU between the detection boxes associated with the multiple tracked targets in frame j is greater than a specific threshold, and determine that the detection boxes of the at least two tracked targets do not overlap if that IOU is less than or equal to the specific threshold.
In one embodiment of the multi-object tracking device of the second aspect, the association module is specifically configured to:
associate the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1; and
compute the IOU between the ROI determined for each tracked target in frame j-1 and the candidate detection boxes of the multiple tracked targets in frame j, taking, for each tracked target, the candidate detection box whose IOU is the largest and exceeds a specific threshold as that target's detection box in frame j.
In one embodiment of the multi-object tracking device of the second aspect, the association module is further configured to:
after each of the multiple tracked targets has been associated with a candidate detection box of frame j, delete the associated detection box from the candidate detection box queue;
for a tracked target that is not successfully associated, delete it from the tracked-target queue if it fails to be associated for several consecutive frames; and
for a candidate detection box that is not successfully associated, add it to the tracked-target queue as a new tracked target if it appears continuously in several consecutive frames.
A further aspect of the present invention provides a computer-readable storage medium on which computer-executable instructions are stored, where the method described above is implemented when the executable instructions are executed by a processor.
A further aspect of the present invention provides a computer device, including a memory, a processor, and executable instructions stored in the memory and runnable on the processor, where the processor implements the method described above when executing the executable instructions.
In summary, an embodiment of the present invention provides a multi-object tracking method: candidate detection boxes of multiple tracked targets in frame j are obtained by target detection; the candidate detection boxes of the multiple tracked targets in frame j are associated with the regions of interest (ROIs) of the multiple tracked targets in frame j-1 to obtain the detection box of each tracked target in frame j; it is determined that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j; and the detection boxes associated with the at least two tracked targets in frame j are reclassified using a classification model, to obtain the detection boxes of the at least two tracked targets after reclassification in frame j, so that the reclassified detection boxes are associated with the ROIs of the at least two tracked targets in frame i-1, where i and j are positive integers and i<j. The present invention allows targets that overlap during tracking to be matched to their correct positions once the overlap ends, thereby ensuring the accuracy of multi-object tracking.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present invention.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart, according to an embodiment of the present invention, for the case where tracked targets overlap in multi-object tracking;
Fig. 2 is a schematic flowchart of multi-object tracking according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of fine-tuning the classification model when tracked targets overlap in multi-object tracking, according to an embodiment of the present invention;
Fig. 4 is a block diagram of a multi-object tracking device 400 according to an exemplary embodiment of the present invention;
Fig. 5 is a block diagram of a computer device for multi-object tracking according to an exemplary embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a multi-object tracking method according to an embodiment of the present invention. The method of Fig. 1 may be executed by a computing device, for example a server. The target tracking method of Fig. 1 includes the following.
110: Obtain candidate detection boxes of multiple tracked targets in frame j by target detection.
In an image (for example, each frame of a video), a closed region distinct from its surroundings is usually called a target. The process of giving the position of a target in the image is called detection. For example, a trained object detection network or model can be used to detect the positions and class information of the multiple tracked targets in the current video frame.
For example, the candidate detection boxes can be obtained as follows: several pictures of the application scenario, or of scenarios close to it, are annotated and used to train a deep-learning-based object detector, and the object detector is then used to obtain the candidate detection boxes of the multiple tracked targets in frame j.
120: Associate the candidate detection boxes of the multiple tracked targets obtained in step 110 in frame j with the regions of interest (ROIs) of these tracked targets in frame j-1, to obtain the detection box of each tracked target in frame j.
In machine vision and image processing, a region to be processed that is outlined in the image with a box, circle, ellipse, irregular polygon, or the like is called a region of interest (ROI). Because the area of the ROI is small, processing time can be reduced and precision increased. In the embodiments of the present invention, the ROI is illustrated as a rectangular box.
Association, also called data association, is a typical processing method commonly used in multi-object tracking tasks to solve the matching problem between targets. For example, data association can be performed using the intersection over union (IOU). The embodiments of the present invention are not limited to this; other methods can also be used, for example probabilistic data association, joint probabilistic data association, and multiple hypothesis tracking.
Tracking means determining the position (ROI) of a target in a certain frame, obtaining relevant information about the target, such as color features and gradient features, and then searching for the specific position of the target in subsequent frames.
130: Determine that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j.
Specifically, whether the detection boxes associated with the multiple tracked targets overlap can be judged by computing the intersection over union (IOU). The IOU here is computed pairwise, within a frame, between the detection boxes associated with the tracked targets. For example, one can check, for each tracked target, whether its associated detection box overlaps the detection boxes associated with the other tracked targets. If the computed IOU between two tracked targets is greater than 0.3, those two targets among the multiple tracked targets have overlapped. Likewise, if for a further tracked target (other than the two already found to overlap) the computed IOU with either of those two targets is greater than 0.3, then more than two tracked targets overlap one another, and so on; any number of tracked targets may overlap. Conversely, if the computed IOU between two tracked targets is less than or equal to 0.3, the two targets do not overlap. Here, overlap may also be called occlusion. It should be understood that the method for determining overlap is not limited to the IOU; any method capable of judging that tracked targets overlap can be used.
In the embodiments of the present invention, according to the above judgment, the at least two tracked targets overlap in frame i, and the overlap is eliminated in a later frame j.
140: Reclassify, using the classification model, the detection boxes associated in frame j (the moment the overlap ends) with the at least two tracked targets that previously overlapped, to obtain the detection boxes of the at least two tracked targets after reclassification in frame j, so that the reclassified detection boxes of the previously overlapping targets are associated with the ROIs of those targets in frame i-1.
Here i and j are positive integers, and i<j.
Specifically, target detection is first performed on each frame of the video to determine the candidate detection boxes of the multiple tracked targets, and the candidate detection boxes of the current frame are associated with the ROIs of the multiple tracked targets in the previous frame, giving the detection box of each tracked target in the current frame. Further, if several tracked targets overlap in the current frame and the overlap is eliminated in some later frame, then in the frame where the overlap ends, the classification model can be used to reclassify the detection boxes associated with the previously overlapping targets to obtain the reclassified detection boxes, so that the reclassified detection boxes are associated with the ROIs of those targets in the frame preceding the one in which the overlap occurred.
After tracked targets overlap, performing data association based only on the ROI information of adjacent frames may cause the trajectory information of a target to be matched incorrectly.
In the embodiments of the present invention, once the at least two tracked targets no longer overlap in frame j according to the above judgment, the classification model can be used to reclassify the detection boxes associated with the once-overlapping targets after the overlap ends, so that each of the at least two previously overlapping targets is associated with the correct detection box, thereby ensuring the accuracy of tracking after the overlap. In addition, since a separate model does not need to be built for each tracked target, the computational complexity is reduced, ensuring the real-time performance of tracking.
Optionally, while the overlapping tracked targets remain overlapped, it cannot be judged whether the matching is correct.
Optionally, as another embodiment, the method of Fig. 1 further includes: establishing a classification model for the multiple tracked targets; and, when it is determined that the at least two tracked targets overlap in their associated detection boxes in frame i, updating the classification model using the ROIs of the at least two tracked targets in the frames before the overlap occurred. In step 140, the updated classification model can then be used to reclassify the detection boxes associated with the at least two tracked targets in frame j.
The classification model can be established and trained as follows: several videos of the application scenario, or of scenarios close to it, are annotated frame by frame or every few frames and used to train a deep-learning-based classification model. Further, the classification model can be retrained with the ROIs of the overlapping targets from before the overlap occurred, and the retrained classification model is then used to reclassify the detection boxes of the targets that overlapped.
Based on the embodiments of the present invention, during multi-object tracking, when tracked targets overlap, the previously trained classification model can be fine-tuned, for example by retraining it with the ROIs of the overlapping targets from before the overlap; after the tracked targets no longer overlap, the fine-tuned classification model can be used to reclassify their detection boxes. Retraining the classification model with the pre-overlap ROIs of the overlapping targets improves its precision, which further ensures the accuracy and real-time performance of the multi-object tracking trajectories.
Optionally, as another embodiment, the method of Fig. 1 further includes: modifying the number of output nodes of the last fully connected layer of the convolutional network of the classification model to equal the number of the at least two tracked targets.
In other words, if N tracked targets overlap, the number of output nodes of the last fully connected layer is changed to N. Since the updated classification model only needs to classify the at least two overlapping targets rather than all tracked targets, correspondingly reducing the number of output nodes of the fully connected layer increases the classification speed of the model.
Optionally, as another embodiment, in the method of Fig. 1, determining that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j includes:
computing the IOU between the detection boxes associated with the multiple tracked targets in frame j;
if the IOU between the detection boxes associated with the multiple tracked targets in frame j is greater than a specific threshold, determining that the detection boxes of at least two tracked targets overlap; and
if the IOU between the detection boxes associated with the multiple tracked targets in frame j is less than or equal to the specific threshold, determining that the detection boxes of the at least two tracked targets do not overlap.
Specifically, the intersection over union (IOU) of two detection candidate boxes is computed as follows:
$IOU = \frac{|BOX1 \cap BOX2|}{|BOX1 \cup BOX2|}$
where BOX1 and BOX2 denote the two tracking boxes, the numerator is the area of their intersection, and the denominator is the area of their union.
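A minimal sketch of this IOU computation follows (an illustration added for clarity, not part of the original disclosure); boxes are assumed to be (x1, y1, x2, y2) corner coordinates, which the patent does not specify.

```python
# Illustrative sketch of the IOU formula above; the box format (x1, y1, x2, y2) is an assumption.
def iou(box1, box2):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # intersection area
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union = area1 + area2 - inter                        # union area
    return inter / union if union > 0 else 0.0
```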
Judging overlap requires computing the IOU: when the IOU is greater than 0.3, at least two of the tracked targets have overlapped each other.
It should be noted that the IOU here is computed, in the current frame, pairwise between the detection boxes associated with all tracked targets. This has been described in more detail in step 130 and is not repeated here.
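As an illustration of this pairwise check (the iou() helper above is reused; the 0.3 threshold follows the text, while the data layout is an assumption):

```python
# Sketch of the pairwise overlap test on the detection boxes associated in the current frame.
def find_overlapping_tracks(frame_boxes, threshold=0.3):
    """frame_boxes: dict mapping track id -> detection box associated in the current frame.
    Returns the set of track ids whose box overlaps another track's box."""
    ids = list(frame_boxes)
    overlapping = set()
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            if iou(frame_boxes[ids[a]], frame_boxes[ids[b]]) > threshold:
                overlapping.update((ids[a], ids[b]))   # both targets enter the "overlapping" state
    return overlapping
```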
Optionally, as another embodiment, in the method of Fig. 1, associating the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1 includes:
computing the IOU between the ROI determined for each tracked target in frame j-1 and the candidate detection boxes of the multiple tracked targets in frame j, and taking, for each tracked target, the candidate detection box whose IOU is the largest and exceeds a specific threshold as that target's detection box in frame j.
Specifically, the data association operation is to compute the IOU between the ROI determined for each tracked target in frame j-1 and all detection candidate boxes of frame j; the concept and computation of the IOU have been described above and are not repeated here. It should be noted that the IOU used for data association is computed in frame j, between the tracked targets' ROIs and the candidate detection boxes. It should also be understood that the method used for data association is not limited to the IOU; any method capable of judging the match between tracked targets and detections can be used.
According to an embodiment of the present invention, the IOU data used for association are recorded in a matrix. Specifically, if frame j-1 has N tracked targets and frame j has M candidate detections, an N x M matrix A records the IOU data, where each element A(n, m) is the IOU between the n-th target of frame j-1 and the m-th detection of frame j.
It is assumed that the same target moves very little between adjacent frames, so the detection closest to the target's position in the previous frame is chosen as the target's position in the current frame: the candidate detection box of the current frame whose IOU is the largest and exceeds a specific threshold is chosen as the position of the tracked target in the current frame. This threshold is generally about 0.3.
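A compact sketch of this association step follows (an illustration under assumed data structures, which the patent does not prescribe). The 0.3 threshold comes from the text; resolving conflicts greedily by track iteration order is a simplification.

```python
# Greedy IOU-based association between frame j-1 ROIs and frame j candidate boxes.
def associate(track_rois, candidates, threshold=0.3):
    """track_rois: dict track_id -> ROI box in frame j-1.
    candidates: list of candidate detection boxes in frame j.
    Returns dict track_id -> index of the associated candidate box."""
    matches, used = {}, set()
    for tid, roi in track_rois.items():
        scores = [iou(roi, det) for det in candidates]   # one row of the N x M matrix A
        best = max(range(len(candidates)), key=lambda m: scores[m], default=None)
        if best is not None and scores[best] > threshold and best not in used:
            matches[tid] = best
            used.add(best)                               # an associated detection leaves the pool
    return matches
```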
Optionally, as another embodiment, the method of Fig. 1 further includes: after each of the multiple tracked targets has been associated with a candidate detection box of frame j, deleting the associated detection box from the candidate detection box queue; for a tracked target that is not successfully associated, deleting it from the tracked-target queue if it fails to be associated for several consecutive frames; and, for a candidate detection box that is not successfully associated, adding it to the tracked-target queue as a new tracked target if it appears continuously in several consecutive frames.
Specifically, for a given video frame, a trained object detection network or model can be used to detect the multiple tracked targets in the current frame, forming the candidate detection box queue. The tracked-target queue is the set of all targets determined in the frames preceding this video frame. When a tracked target in the queue has been associated with a candidate detection box of this frame, the associated detection box is deleted from the candidate detection box queue. The purpose of doing so is to reduce the number of detection boxes that need further processing, improving tracking speed and efficiency.
For a tracked target that is not successfully associated with any candidate detection box in a given video frame, if it still fails to be associated with any candidate detection box in the next several consecutive frames, the tracked target is deleted from the tracked-target queue. The purpose of this is to reduce the number of tracked targets and reduce interference, improving tracking speed and efficiency.
For a candidate detection box in a video frame that is not successfully associated with any tracked target, if this candidate detection box keeps appearing in the next several consecutive frames, it is added to the tracked-target queue as a new tracked target. This is because a target that repeatedly appears in a small region within a short time can essentially be judged to be a target that should be tracked. Doing so makes the tracking more complete and accurate.
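The queue bookkeeping just described can be sketched as follows (illustrative only: the patent says "several consecutive frames" without fixing numbers, so max_misses and min_hits are assumed values, and matching leftover detections across frames by IOU, reusing the iou() helper above, is one possible choice):

```python
def update_queues(tracks, pending, candidates, matches, next_id,
                  max_misses=5, min_hits=3):
    """tracks/pending: dict id -> {'box', 'misses'/'hits'}; matches: track id -> candidate index."""
    # 1. associated detections update their tracks and leave the candidate pool
    for tid, det_idx in matches.items():
        tracks[tid].update(box=candidates[det_idx], misses=0)
    leftover = [c for i, c in enumerate(candidates) if i not in matches.values()]
    # 2. tracks unassociated for several consecutive frames are deleted
    for tid in list(tracks):
        if tid not in matches:
            tracks[tid]['misses'] += 1
            if tracks[tid]['misses'] > max_misses:
                del tracks[tid]
    # 3. leftover detections that reappear in consecutive frames become new tracks
    for box in leftover:
        hit = next((pid for pid, p in pending.items() if iou(p['box'], box) > 0.3), None)
        if hit is None:
            pending[next_id] = {'box': box, 'hits': 1}
            next_id += 1
        else:
            pending[hit]['box'], pending[hit]['hits'] = box, pending[hit]['hits'] + 1
            if pending[hit]['hits'] >= min_hits:
                tracks[hit] = {'box': box, 'misses': 0}
                del pending[hit]
    return next_id
```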
According to an embodiment of the present invention, the candidate detection boxes can be obtained as follows: several pictures of the application scenario, or of scenarios close to it, are annotated and used to train a deep-learning-based object detector, and the object detector is used to obtain the candidate detection boxes of the multiple tracked targets in any frame to be processed.
According to an embodiment of the present invention, the classification model of the multiple tracked targets can be established as follows: several videos of the application scenario, or of scenarios close to it, are annotated frame by frame or every few frames and used to train a deep-learning-based classification model.
Fig. 2 is a schematic flowchart of the whole multi-object tracking process according to an embodiment of the present invention.
Multi-object tracking technology can track the motion trajectories of multiple targets over a period of time, solving the problem of perceiving moving objects.
As described above, the following steps are also performed before step 110:
210: Build and train the network models.
The network models include a deep-learning-based object detection model, for example a deep neural network model such as Faster RCNN (Faster Region-based Convolutional Neural Network), SSD (Single Shot MultiBox Detector), or YOLO (You Only Look Once). For example, several pictures of the application scenario, or of scenarios close to it, are annotated to train the deep-learning-based object detector.
Specifically, the deep-learning-based object detection model takes an image as input and computes its feature map through multiple convolutional layers. Anchor coordinate boxes are then designed according to the size distribution of objects in the images, and classification and bounding-box regression are performed at the anchor positions to find locations in the image where objects may exist. These locations are mapped onto the feature map as regions of interest (ROIs), the convolutional features at each location are extracted, and large numbers of inner products with the trained parameters produce vector-form features for further classification and bounding-box regression, finally yielding the positions and classes of the objects present in the image. The role of the convolutional layers is to extract image features: after a two-dimensional convolution and bias operation is applied to the input image, a nonlinear activation function is used to obtain a convolution result, i.e., an image feature.
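How such a detector might be invoked to produce the candidate boxes of step 230 is sketched below. The patent names Faster RCNN, SSD, and YOLO but prescribes no framework, so the use of torchvision and the 0.5 score threshold are assumptions.

```python
# Hedged sketch: obtaining candidate detection boxes with an off-the-shelf detector.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

def candidate_boxes(frame_tensor, score_thresh=0.5):
    """frame_tensor: float image tensor of shape (3, H, W) with values in [0, 1].
    Returns candidate boxes (x1, y1, x2, y2) whose detection score exceeds the threshold."""
    with torch.no_grad():
        out = detector([frame_tensor])[0]        # dict with 'boxes', 'labels', 'scores'
    keep = out['scores'] > score_thresh
    return out['boxes'][keep].tolist()
```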
In step 210, the network models also include a deep-learning-based object classification model, such as ResNet18, CaffeNet, or GoogleNet. For datasets with a high degree of similarity between classes, a triplet loss is used to train the classification model.
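A minimal sketch of such triplet-loss training follows (the framework, the margin value, and the assumption that the model outputs an embedding are illustrative choices, not taken from the patent):

```python
# Sketch of one triplet-loss training step for the classification/embedding network.
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=0.2)

def triplet_step(model, optimizer, anchor, positive, negative):
    """anchor/positive: image batches of the same tracked target; negative: a different target."""
    optimizer.zero_grad()
    loss = triplet_loss(model(anchor), model(positive), model(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```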
Building and training the network models includes the following:
Deep features of the tracked targets are extracted with a pre-trained deep neural network model, and feature extraction and classification training are performed on tracked targets whose sample count exceeds a certain number, yielding the classification model.
Specifically, real videos of the application scenario, or of scenarios close to it, can be collected and annotated. Annotation can be done frame by frame, or every few frames (for example every 1, 2, or 5 frames), to obtain coarsely sampled images. Different tracked targets are given different IDs, and a database for tracked-target image classification is built from the annotated and labeled pictures; the classification model is built from this database.
In a deep-learning object classification network, the role of the fully connected layer is to combine all the features and provide the output values to the classifier.
According to an embodiment of the present invention, assuming that the number of tracked-target classes counted is N, the number of outputs of the last fully connected layer of the classification network is set to N, so that the classification model is able to distinguish these N classes.
220: Input a video frame.
230: Obtain the candidate detection boxes of the targets in the current frame using the deep-learning-based object detector.
This step is similar to step 110 described above and is not repeated here.
240: Perform data association between the detection candidate boxes obtained for the current frame and the regions of interest (ROIs) of the tracked targets in the previous frame, to obtain the detection box of each tracked target in the current frame.
250: Judge whether any of the tracked targets overlap in their associated detection boxes in the current frame.
260: If a tracked target's detection box does not overlap those of other tracked targets, output the tracking result, i.e., use the IOU for data association of the tracked target between adjacent frames.
270: If the detection boxes of at least two tracked targets overlap, set the corresponding at least two tracked targets to the "overlapping" state.
After tracked targets overlap, performing data association based only on the ROI information of adjacent frames may cause the trajectory information of a target to be matched incorrectly.
280: Judge whether the overlapping tracked targets have stopped overlapping.
For overlapping tracked targets, when their IOU falls below 0.3, the previously overlapping targets no longer overlap.
290: Reclassify, using the fine-tuned classification model, the detection boxes associated with the tracked targets after the overlap ends, so that the targets that overlapped are associated with the correct detection boxes after the overlap; that is, the detection boxes of the previously overlapping targets after the overlap ends are correctly associated with their ROIs in the frame before the overlap occurred.
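Step 290 can be pictured with the sketch below (illustrative; the crop/resize preprocessing and the mapping from class index to track id are assumptions, since the patent does not fix these details):

```python
# Sketch of reclassifying the detection boxes of formerly overlapping targets (step 290).
import torch

def reclassify(model, frame, boxes, track_ids, preprocess):
    """boxes: detection boxes in the frame where the overlap ends;
    track_ids: track id for each class index used during fine-tuning;
    preprocess: crops a box from the frame and returns a (3, H, W) float tensor."""
    model.eval()
    assignment = {}
    with torch.no_grad():
        for box in boxes:
            logits = model(preprocess(frame, box).unsqueeze(0))
            assignment[tuple(box)] = track_ids[int(logits.argmax(dim=1))]   # box -> recovered track
    return assignment
```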
To improve computation speed and classification performance, the pre-trained classification model is fine-tuned.
Fig. 3 is a schematic flowchart of fine-tuning the classification model when tracked targets overlap in multi-object tracking, according to an embodiment of the present invention.
310: Change the number of output nodes of the last fully connected layer to N.
At this point the classification network needs to learn the characteristics of each of the overlapping tracked targets. The classes the model needs to distinguish differ from those of the pre-trained classification model: the number of classes is the number of tracked targets that overlap.
Assuming N targets overlap, the number of output nodes of the last fully connected layer is changed to N.
320: Load the pre-trained classification model and initialize the parameters of the last fully connected layer.
Changing the number of output nodes of the last fully connected layer of the classification network makes the parameter count of that layer mismatch, so a randomly initialized value is needed; this parameter value does not depend on the pre-trained classification model. The parameters of the last fully connected layer are initialized so that it outputs the corresponding number of nodes.
Loading the pre-trained classification model while initializing the parameters of the last fully connected layer gives the new model to be fine-tuned.
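Steps 310-320 can be illustrated as follows (a hedged sketch assuming a torchvision ResNet18, one of the classifiers named above; the framework choice is not specified by the patent):

```python
# Sketch of steps 310-320: keep the pre-trained weights, replace and re-initialize the last FC layer.
import copy
import torch.nn as nn

def prepare_finetune_model(pretrained_classifier, num_overlapping):
    """pretrained_classifier: the classification model trained in step 210 (e.g. a ResNet18).
    num_overlapping: N, the number of tracked targets currently in the overlapping state."""
    model = copy.deepcopy(pretrained_classifier)                 # pre-trained weights stay intact
    model.fc = nn.Linear(model.fc.in_features, num_overlapping)  # new last FC layer, fresh random init
    return model
```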
330: Retrain the classification model with the ROIs of the overlapping tracked targets from before the overlap.
For the at least two tracked targets that overlap in frame i, the frames before the overlap began are traced back and all of their corresponding ROIs are extracted; the modified classification model is then iterated (retrained) several times on these ROIs, so that it can recognize the overlapping tracked targets well.
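A minimal sketch of this retraining loop (the epoch count, optimizer, and learning rate are illustrative assumptions):

```python
# Sketch of step 330: a few passes of retraining on the pre-overlap ROI crops.
import torch

def finetune_on_rois(model, roi_crops, labels, epochs=5, lr=1e-3):
    """roi_crops: float tensor (K, 3, H, W) of ROI crops collected before the overlap began;
    labels: long tensor (K,) with one class index per overlapping target."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(roi_crops), labels)
        loss.backward()
        optimizer.step()
    return model
```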
While the overlapping tracked targets remain overlapped, it is not judged whether the matching is correct.
According to the technical solution provided by the embodiments of the present invention, when tracked targets are judged to overlap, the previously trained classification model is fine-tuned by retraining it with the pre-overlap ROIs of the overlapping targets; after the targets no longer overlap, the corresponding detection boxes are reclassified with the fine-tuned classification model. This ensures that each tracked target is matched to its correct position when the overlap ends, thereby ensuring the accuracy and real-time performance of multi-object tracking.
Any combination of the optional technical solutions above may form an alternative embodiment of the present invention, which is not described further here.
The following are device embodiments of the present invention, which can be used to perform the method embodiments of the present invention. For details not disclosed in the device embodiments, please refer to the method embodiments of the present invention.
Fig. 4 is a block diagram of a target tracking device 400 according to an exemplary embodiment of the present invention.
As shown in Fig. 4, the device 400 includes: an acquisition module 410 for obtaining candidate detection boxes of multiple tracked targets in frame j by target detection; an association module 420 for associating the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1, to obtain the detection box of each tracked target in frame j; a determination module 430 for determining that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j; and a classification module 440 for reclassifying, using the classification model, the detection boxes associated in frame j (when the overlap ends) with the at least two tracked targets, to obtain the detection boxes of the previously overlapping targets after reclassification in frame j, so that when the overlap ends, the reclassified detection boxes of the previously overlapping targets are correctly associated with their ROIs before the overlap.
Optionally, the device 400 further includes: an establishment module 405 for establishing a classification model for the multiple tracked targets; and an update module 435 for, when it is determined that the at least two tracked targets overlap in their associated detection boxes before frame i, updating the classification model using the ROIs of the at least two tracked targets in the frames before the overlap occurred, where the classification module reclassifies the detection boxes associated with the at least two tracked targets in frame j using the updated classification model.
The update module 435 is specifically configured to modify the number of output nodes of the last fully connected layer of the convolutional network of the classification model to equal the number of the at least two tracked targets.
The determination module 430 is specifically configured to: determine that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j; compute the IOU between the detection boxes associated with the multiple tracked targets in frame j; determine that the detection boxes of at least two tracked targets overlap if the IOU between the detection boxes associated with the multiple tracked targets in frame j is greater than a specific threshold; and determine that the detection boxes of the at least two tracked targets do not overlap if that IOU is less than or equal to the specific threshold.
The association module 420 is specifically configured to: associate the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1; and compute the IOU between the ROI determined for each tracked target in frame j-1 and the candidate detection boxes of the multiple tracked targets in frame j, taking, for each tracked target, the candidate detection box whose IOU is the largest and exceeds a specific threshold as that target's detection box in frame j.
The association module 420 is further configured to: after each of the multiple tracked targets has been associated with a candidate detection box of frame j, delete the associated detection box from the candidate detection box queue; for a tracked target that is not successfully associated, delete it from the tracked-target queue if it fails to be associated for several consecutive frames; and, for a candidate detection box that is not successfully associated, add it to the tracked-target queue as a new tracked target if it appears continuously in several consecutive frames.
The acquisition module 410 is specifically configured to obtain the candidate detection boxes of the multiple tracked targets in frame j by target detection, where several pictures of the application scenario, or of scenarios close to it, are annotated to train the deep-learning-based object detector.
The establishment module 405 is specifically configured to annotate several videos of the application scenario, or of scenarios close to it, frame by frame or every few frames, to train the deep-learning-based classification model.
Fig. 5 is a block diagram of a computer device 500 for multi-object tracking according to an exemplary embodiment of the present invention.
Referring to Fig. 5, the device 500 includes a processing component 510, which further includes one or more processors, and memory resources represented by a memory 520 for storing instructions executable by the processing component 510, such as application programs. The application programs stored in the memory 520 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 510 is configured to execute the instructions to perform the target tracking method described above.
The device 500 may also include a power supply component configured to perform power management of the device 500, a wired or wireless network interface configured to connect the device 500 to a network, and an input/output (I/O) interface. The device 500 can operate based on an operating system stored in the memory 520, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
A non-transitory computer-readable storage medium is also provided: when the instructions in the storage medium are executed by the processor of the above device 500, the device 500 is able to perform a multi-object tracking method including: obtaining candidate detection boxes of multiple tracked targets in frame j by target detection; associating the candidate detection boxes of the multiple tracked targets in frame j with the regions of interest (ROIs) of the multiple tracked targets in frame j-1, to obtain the detection box of each tracked target in frame j; determining that at least two of the tracked targets overlapped in their associated detection boxes in frame i and no longer overlap in their associated detection boxes in frame j; and reclassifying the detection boxes associated with the at least two tracked targets in frame j using a classification model, to obtain the detection boxes of the at least two tracked targets after reclassification in frame j, so that the reclassified detection boxes of the at least two tracked targets in frame j are associated with the ROIs of the at least two tracked targets in frame i-1, where i and j are positive integers and i<j.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of units is only a logical functional division, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can easily be conceived by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A multi-object tracking method, characterized by comprising:
obtaining candidate detection boxes of multiple tracking targets in a j-th frame through target detection;
associating the candidate detection boxes of the multiple tracking targets in the j-th frame with regions of interest (ROIs) of the multiple tracking targets in a (j-1)-th frame, to obtain detection boxes corresponding to the multiple tracking targets in the j-th frame;
determining that the detection boxes associated with at least two of the multiple tracking targets overlap in an i-th frame and no longer overlap in the j-th frame;
reclassifying the detection boxes associated with the at least two tracking targets in the j-th frame using a classification and recognition model, to obtain detection boxes of the at least two tracking targets after reclassification in the j-th frame, so that the reclassified detection boxes of the at least two tracking targets in the j-th frame are associated with the ROIs of the at least two tracking targets in an (i-1)-th frame, where i and j are positive integers and i<j.
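As a rough illustration of the reclassification step in claim 1, the Python sketch below reassigns target identities to the detection boxes of the j-th frame once an overlap has ended. The appearance descriptor, the nearest-centroid matching, and all names and parameters (appearance_feature, reclassify, bins) are illustrative assumptions, not the classification and recognition model prescribed by this application.

```python
import numpy as np

def appearance_feature(image, box, bins=16):
    """Toy appearance descriptor: normalized grayscale histogram of the box crop."""
    x1, y1, x2, y2 = (int(v) for v in box)
    crop = image[y1:y2, x1:x2]
    hist, _ = np.histogram(crop, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def reclassify(image, boxes, centroids):
    """Reassign target identities to the j-th-frame detection boxes after an
    overlap ends, by matching each box to the nearest per-target appearance
    centroid. Each identity is used at most once."""
    assignment, available = {}, dict(centroids)
    for box in boxes:
        if not available:
            break
        feat = appearance_feature(image, box)
        target_id = min(available, key=lambda t: np.linalg.norm(available[t] - feat))
        assignment[target_id] = box
        del available[target_id]
    return assignment
```

A nearest-centroid classifier is used here only to keep the sketch short; any per-target classifier trained on pre-overlap appearances would fill the same role.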
2. The multi-object tracking method according to claim 1, further comprising:
establishing the classification and recognition model of the multiple tracking targets;
when it is determined that the detection boxes associated with the at least two tracking targets overlap in the i-th frame, updating the classification and recognition model using the ROIs of the at least two tracking targets in frames before the overlap occurs,
wherein reclassifying the detection boxes associated with the at least two tracking targets in the j-th frame using the classification and recognition model comprises:
reclassifying the detection boxes associated with the at least two tracking targets in the j-th frame using the updated classification and recognition model.
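Under the same illustrative assumptions as the previous sketch (it reuses appearance_feature), the model update of claim 2 could be pictured as refreshing each target's appearance centroid from its ROI in frames before the overlap occurred; the moving-average form and the alpha value are assumptions chosen only for illustration.

```python
def update_model(image, rois_before_overlap, centroids, alpha=0.5):
    """Refresh each target's appearance centroid from its ROI in a frame before
    the overlap occurred (exponential moving average; alpha is illustrative)."""
    for target_id, roi in rois_before_overlap.items():
        feat = appearance_feature(image, roi)
        if target_id in centroids:
            centroids[target_id] = alpha * centroids[target_id] + (1 - alpha) * feat
        else:
            centroids[target_id] = feat
    return centroids
```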
3. The multi-object tracking method according to claim 1 or 2, wherein determining that the detection boxes associated with at least two of the multiple tracking targets overlap in the i-th frame and no longer overlap in the j-th frame comprises:
computing the intersection over union (IOU) between the detection boxes associated with the multiple tracking targets in the j-th frame;
if the IOU between the detection boxes associated with the multiple tracking targets in the j-th frame is greater than a specific threshold, determining that the detection boxes of at least two of the multiple tracking targets overlap;
if the IOU between the detection boxes associated with the multiple tracking targets in the j-th frame is less than or equal to the specific threshold, determining that the detection boxes of the at least two of the multiple tracking targets do not overlap.
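A minimal sketch of the IOU computation and threshold test referred to in claim 3; the (x1, y1, x2, y2) box format and the threshold value 0.3 are assumptions made for illustration.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def boxes_overlap(box_a, box_b, threshold=0.3):
    """Boxes are treated as overlapping when their IOU exceeds the threshold."""
    return iou(box_a, box_b) > threshold
```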
4. The multi-object tracking method according to claim 1 or 2, wherein associating the candidate detection boxes of the multiple tracking targets in the j-th frame with the ROIs of the multiple tracking targets in the (j-1)-th frame comprises:
computing the IOU between the ROIs determined for the multiple tracking targets in the (j-1)-th frame and the candidate detection boxes of the multiple tracking targets in the j-th frame, and taking, for each of the multiple tracking targets, the candidate detection box whose IOU value is the largest and greater than a specific threshold as the detection box of that tracking target in the j-th frame.
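The greedy highest-IOU association of claim 4 might look as follows, reusing the iou helper from the previous sketch; the data structures and the threshold value are illustrative assumptions.

```python
def associate(rois_prev, candidates, threshold=0.3):
    """For each tracking target, pick the j-th-frame candidate detection box with
    the highest IOU against the target's ROI from frame j-1, provided that IOU
    exceeds the threshold. Returns (matches, unmatched_targets, unmatched_boxes)."""
    remaining = list(range(len(candidates)))
    matches, unmatched_targets = {}, []
    for target_id, roi in rois_prev.items():
        best_idx, best_iou = None, threshold
        for idx in remaining:
            score = iou(roi, candidates[idx])
            if score > best_iou:
                best_idx, best_iou = idx, score
        if best_idx is None:
            unmatched_targets.append(target_id)
        else:
            matches[target_id] = candidates[best_idx]
            remaining.remove(best_idx)  # an associated box is not reused
    return matches, unmatched_targets, [candidates[i] for i in remaining]
```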
5. The multi-object tracking method according to claim 4, further comprising:
after each of the multiple tracking targets has been associated with the candidate detection boxes of the j-th frame, deleting the associated detection boxes from a candidate detection box queue;
for a tracking target that is not successfully associated, deleting it from a tracking target queue if it fails to be associated for several consecutive frames;
for a candidate detection box that is not successfully associated, adding it to the tracking target queue as a new tracking target if it appears in several consecutive frames.
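The queue bookkeeping of claim 5 could be sketched as below. The miss and hit limits (max_misses, min_hits) and the use of IOU to decide whether an unmatched candidate box reappears across frames are assumptions made only to keep the example self-contained (it reuses the iou helper above).

```python
def update_queues(tracks, tentative, matches, unmatched_targets,
                  unmatched_boxes, max_misses=5, min_hits=3):
    """Maintain the tracking-target and candidate-box queues after association.

    tracks:    {target_id: {'box': box, 'misses': int}}  confirmed targets
    tentative: list of {'box': box, 'hits': int}          possible new targets
    """
    next_id = max(tracks, default=0) + 1
    # Targets associated in this frame keep their identity; reset the miss count.
    for target_id, box in matches.items():
        tracks[target_id] = {'box': box, 'misses': 0}
    # Targets that failed to associate are dropped after several missed frames.
    for target_id in unmatched_targets:
        tracks[target_id]['misses'] += 1
        if tracks[target_id]['misses'] >= max_misses:
            del tracks[target_id]
    # Unmatched candidate boxes that keep reappearing become new targets.
    new_tentative = []
    for box in unmatched_boxes:
        prev = next((t for t in tentative if iou(t['box'], box) > 0.3), None)
        hits = prev['hits'] + 1 if prev else 1
        if hits >= min_hits:
            tracks[next_id] = {'box': box, 'misses': 0}
            next_id += 1
        else:
            new_tentative.append({'box': box, 'hits': hits})
    return tracks, new_tentative
```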
6. A multi-object tracking device, characterized by comprising:
an acquisition module, configured to obtain candidate detection boxes of multiple tracking targets in a j-th frame through target detection;
an association module, configured to associate the candidate detection boxes of the multiple tracking targets in the j-th frame with regions of interest (ROIs) of the multiple tracking targets in a (j-1)-th frame, to obtain detection boxes corresponding to the multiple tracking targets in the j-th frame;
a determination module, configured to determine that the detection boxes associated with at least two of the multiple tracking targets overlap in an i-th frame and no longer overlap in the j-th frame;
a classification module, configured to reclassify the detection boxes associated with the at least two tracking targets in the j-th frame using a classification and recognition model, to obtain detection boxes of the at least two tracking targets after reclassification in the j-th frame, so that the reclassified detection boxes of the at least two tracking targets in the j-th frame are associated with the ROIs of the at least two tracking targets in an (i-1)-th frame, where i and j are positive integers and i<j.
7. The multi-object tracking device according to claim 6, further comprising:
an establishing module, configured to establish the classification and recognition model of the multiple tracking targets;
an updating module, configured to, when it is determined that the detection boxes associated with the at least two tracking targets overlap in the i-th frame, update the classification and recognition model using the ROIs of the at least two tracking targets in frames before the overlap occurs, wherein the classification module reclassifies the associated detection boxes of the at least two tracking targets using the updated classification and recognition model.
8. The multi-object tracking device according to claim 6 or 7, wherein the determination module is specifically configured to:
determine that the detection boxes associated with at least two of the multiple tracking targets overlap in the i-th frame and no longer overlap in the j-th frame;
compute the intersection over union (IOU) between the detection boxes associated with the multiple tracking targets in the j-th frame;
determine that the detection boxes of at least two of the multiple tracking targets overlap if the IOU between the detection boxes associated with the multiple tracking targets in the j-th frame is greater than a specific threshold; and determine that the detection boxes of the at least two of the multiple tracking targets do not overlap if the IOU between the detection boxes associated with the multiple tracking targets in the j-th frame is less than or equal to the specific threshold.
9. The multi-object tracking device according to claim 6 or 7, wherein the association module is specifically configured to:
associate the candidate detection boxes of the multiple tracking targets in the j-th frame with the regions of interest (ROIs) of the multiple tracking targets in the (j-1)-th frame; compute the IOU between the ROIs determined for the multiple tracking targets in the (j-1)-th frame and the candidate detection boxes of the multiple tracking targets in the j-th frame, and take, for each of the multiple tracking targets, the candidate detection box whose IOU value is the largest and greater than a specific threshold as the detection box of that tracking target in the j-th frame.
10. The multi-object tracking device according to claim 9, wherein the association module is further configured to:
after each of the multiple tracking targets has been associated with the candidate detection boxes of the j-th frame, delete the associated detection boxes from a candidate detection box queue;
for a tracking target that is not successfully associated, delete it from a tracking target queue if it fails to be associated for several consecutive frames;
for a candidate detection box that is not successfully associated, add it to the tracking target queue as a new tracking target if it appears in several consecutive frames.
CN201810069852.4A 2018-01-24 2018-01-24 Multi-target tracking method and device Active CN108470332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810069852.4A CN108470332B (en) 2018-01-24 2018-01-24 Multi-target tracking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810069852.4A CN108470332B (en) 2018-01-24 2018-01-24 Multi-target tracking method and device

Publications (2)

Publication Number Publication Date
CN108470332A true CN108470332A (en) 2018-08-31
CN108470332B CN108470332B (en) 2023-07-07

Family

ID=63266144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810069852.4A Active CN108470332B (en) 2018-01-24 2018-01-24 Multi-target tracking method and device

Country Status (1)

Country Link
CN (1) CN108470332B (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087510A (en) * 2018-09-29 2018-12-25 讯飞智元信息科技有限公司 traffic monitoring method and device
CN109215059A (en) * 2018-10-16 2019-01-15 西安建筑科技大学 Local data's correlating method of moving vehicle tracking in a kind of video of taking photo by plane
CN109447121A (en) * 2018-09-27 2019-03-08 清华大学 A kind of Visual Sensor Networks multi-object tracking method, apparatus and system
CN109615641A (en) * 2018-11-23 2019-04-12 中山大学 Multiple target pedestrian tracking system and tracking based on KCF algorithm
CN109784173A (en) * 2018-12-14 2019-05-21 合肥阿巴赛信息科技有限公司 A kind of shop guest's on-line tracking of single camera
CN109934849A (en) * 2019-03-08 2019-06-25 西北工业大学 Online multi-object tracking method based on track metric learning
CN109977906A (en) * 2019-04-04 2019-07-05 睿魔智能科技(深圳)有限公司 Gesture identification method and system, computer equipment and storage medium
CN110047095A (en) * 2019-03-06 2019-07-23 平安科技(深圳)有限公司 Tracking, device and terminal device based on target detection
CN110210304A (en) * 2019-04-29 2019-09-06 北京百度网讯科技有限公司 Method and system for target detection and tracking
CN111354023A (en) * 2020-03-09 2020-06-30 中振同辂(江苏)机器人有限公司 Camera-based visual multi-target tracking method
CN111402288A (en) * 2020-03-26 2020-07-10 杭州博雅鸿图视频技术有限公司 Target detection tracking method and device
CN111415461A (en) * 2019-01-08 2020-07-14 虹软科技股份有限公司 Article identification method and system and electronic equipment
CN111445501A (en) * 2020-03-25 2020-07-24 苏州科达科技股份有限公司 Multi-target tracking method, device and storage medium
CN111489284A (en) * 2019-01-29 2020-08-04 北京搜狗科技发展有限公司 Image processing method and device for image processing
CN111652902A (en) * 2020-06-02 2020-09-11 浙江大华技术股份有限公司 Target tracking detection method, electronic equipment and device
CN111986228A (en) * 2020-09-02 2020-11-24 华侨大学 Pedestrian tracking method, device and medium based on LSTM model escalator scene
CN112037256A (en) * 2020-08-17 2020-12-04 中电科新型智慧城市研究院有限公司 Target tracking method and device, terminal equipment and computer readable storage medium
CN112215053A (en) * 2019-07-12 2021-01-12 通用汽车环球科技运作有限责任公司 Multi-sensor multi-object tracking
CN113409359A (en) * 2021-06-25 2021-09-17 之江实验室 Multi-target tracking method based on feature aggregation
CN114022803A (en) * 2021-09-30 2022-02-08 苏州浪潮智能科技有限公司 Multi-target tracking method and device, storage medium and electronic equipment
WO2022127876A1 (en) * 2020-12-16 2022-06-23 影石创新科技股份有限公司 Target tracking method, computer-readable storage medium, and computer device
CN116091552A (en) * 2023-04-04 2023-05-09 上海鉴智其迹科技有限公司 Target tracking method, device, equipment and storage medium based on deep SORT
CN116597417A (en) * 2023-05-16 2023-08-15 北京斯年智驾科技有限公司 Obstacle movement track determining method, device, equipment and storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102214291A (en) * 2010-04-12 2011-10-12 云南清眸科技有限公司 Method for quickly and accurately detecting and tracking human face based on video sequence
CN103259962A (en) * 2013-04-17 2013-08-21 深圳市捷顺科技实业股份有限公司 Target tracking method and related device
CN104732187A (en) * 2013-12-18 2015-06-24 杭州华为企业通信技术有限公司 Method and equipment for image tracking processing
US20160328613A1 (en) * 2015-05-05 2016-11-10 Xerox Corporation Online domain adaptation for multi-object tracking
CN104834916A (en) * 2015-05-14 2015-08-12 上海太阳能科技有限公司 Multi-face detecting and tracking method
EP3096292A1 (en) * 2015-05-18 2016-11-23 Xerox Corporation Multi-object tracking with generic object proposals
US20170061229A1 (en) * 2015-09-01 2017-03-02 Sony Corporation Method and system for object tracking
US20170286774A1 (en) * 2016-04-04 2017-10-05 Xerox Corporation Deep data association for online multi-class multi-object tracking
CN106097391A (en) * 2016-06-13 2016-11-09 浙江工商大学 A kind of multi-object tracking method identifying auxiliary based on deep neural network
CN106934817A (en) * 2017-02-23 2017-07-07 中国科学院自动化研究所 Based on multiattribute multi-object tracking method and device
CN106971401A (en) * 2017-03-30 2017-07-21 联想(北京)有限公司 Multiple target tracking apparatus and method
CN107122735A (en) * 2017-04-26 2017-09-01 中山大学 A kind of multi-object tracking method based on deep learning and condition random field
CN107066990A (en) * 2017-05-04 2017-08-18 厦门美图之家科技有限公司 A kind of method for tracking target and mobile device
CN107403175A (en) * 2017-09-21 2017-11-28 昆明理工大学 Visual tracking method and Visual Tracking System under a kind of movement background

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447121B (en) * 2018-09-27 2020-11-06 清华大学 Multi-target tracking method, device and system for visual sensor network
CN109447121A (en) * 2018-09-27 2019-03-08 清华大学 A kind of Visual Sensor Networks multi-object tracking method, apparatus and system
CN109087510B (en) * 2018-09-29 2021-09-07 讯飞智元信息科技有限公司 Traffic monitoring method and device
CN109087510A (en) * 2018-09-29 2018-12-25 讯飞智元信息科技有限公司 traffic monitoring method and device
CN109215059A (en) * 2018-10-16 2019-01-15 西安建筑科技大学 Local data's correlating method of moving vehicle tracking in a kind of video of taking photo by plane
CN109215059B (en) * 2018-10-16 2021-06-29 西安建筑科技大学 Local data association method for tracking moving vehicle in aerial video
CN109615641A (en) * 2018-11-23 2019-04-12 中山大学 Multiple target pedestrian tracking system and tracking based on KCF algorithm
CN109615641B (en) * 2018-11-23 2022-11-29 中山大学 Multi-target pedestrian tracking system and tracking method based on KCF algorithm
CN109784173A (en) * 2018-12-14 2019-05-21 合肥阿巴赛信息科技有限公司 A kind of shop guest's on-line tracking of single camera
US11335092B2 (en) 2019-01-08 2022-05-17 Arcsoft Corporation Limited Item identification method, system and electronic device
CN111415461B (en) * 2019-01-08 2021-09-28 虹软科技股份有限公司 Article identification method and system and electronic equipment
CN111415461A (en) * 2019-01-08 2020-07-14 虹软科技股份有限公司 Article identification method and system and electronic equipment
CN111489284B (en) * 2019-01-29 2024-02-06 北京搜狗科技发展有限公司 Image processing method and device for image processing
CN111489284A (en) * 2019-01-29 2020-08-04 北京搜狗科技发展有限公司 Image processing method and device for image processing
CN110047095B (en) * 2019-03-06 2023-07-21 平安科技(深圳)有限公司 Tracking method and device based on target detection and terminal equipment
CN110047095A (en) * 2019-03-06 2019-07-23 平安科技(深圳)有限公司 Tracking, device and terminal device based on target detection
CN109934849A (en) * 2019-03-08 2019-06-25 西北工业大学 Online multi-object tracking method based on track metric learning
CN109977906B (en) * 2019-04-04 2021-06-01 睿魔智能科技(深圳)有限公司 Gesture recognition method and system, computer device and storage medium
CN109977906A (en) * 2019-04-04 2019-07-05 睿魔智能科技(深圳)有限公司 Gesture identification method and system, computer equipment and storage medium
CN110210304A (en) * 2019-04-29 2019-09-06 北京百度网讯科技有限公司 Method and system for target detection and tracking
CN112215053B (en) * 2019-07-12 2023-09-19 通用汽车环球科技运作有限责任公司 Multi-sensor multi-object tracking
CN112215053A (en) * 2019-07-12 2021-01-12 通用汽车环球科技运作有限责任公司 Multi-sensor multi-object tracking
CN111354023A (en) * 2020-03-09 2020-06-30 中振同辂(江苏)机器人有限公司 Camera-based visual multi-target tracking method
WO2021189825A1 (en) * 2020-03-25 2021-09-30 苏州科达科技股份有限公司 Multi-target tracking method and apparatus, and storage medium
CN111445501A (en) * 2020-03-25 2020-07-24 苏州科达科技股份有限公司 Multi-target tracking method, device and storage medium
CN111402288A (en) * 2020-03-26 2020-07-10 杭州博雅鸿图视频技术有限公司 Target detection tracking method and device
CN111652902A (en) * 2020-06-02 2020-09-11 浙江大华技术股份有限公司 Target tracking detection method, electronic equipment and device
CN112037256A (en) * 2020-08-17 2020-12-04 中电科新型智慧城市研究院有限公司 Target tracking method and device, terminal equipment and computer readable storage medium
CN111986228A (en) * 2020-09-02 2020-11-24 华侨大学 Pedestrian tracking method, device and medium based on LSTM model escalator scene
CN111986228B (en) * 2020-09-02 2023-06-02 华侨大学 Pedestrian tracking method, device and medium based on LSTM model escalator scene
WO2022127876A1 (en) * 2020-12-16 2022-06-23 影石创新科技股份有限公司 Target tracking method, computer-readable storage medium, and computer device
CN113409359A (en) * 2021-06-25 2021-09-17 之江实验室 Multi-target tracking method based on feature aggregation
CN114022803B (en) * 2021-09-30 2023-11-14 苏州浪潮智能科技有限公司 Multi-target tracking method and device, storage medium and electronic equipment
CN114022803A (en) * 2021-09-30 2022-02-08 苏州浪潮智能科技有限公司 Multi-target tracking method and device, storage medium and electronic equipment
CN116091552A (en) * 2023-04-04 2023-05-09 上海鉴智其迹科技有限公司 Target tracking method, device, equipment and storage medium based on deep SORT
CN116597417A (en) * 2023-05-16 2023-08-15 北京斯年智驾科技有限公司 Obstacle movement track determining method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN108470332B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN108470332A (en) A kind of multi-object tracking method and device
Angah et al. Tracking multiple construction workers through deep learning and the gradient based method with re-matching based on multi-object tracking accuracy
Soomro et al. Action localization in videos through context walk
CN108549846B (en) Pedestrian detection and statistics method combining motion characteristics and head-shoulder structure
Patruno et al. People re-identification using skeleton standard posture and color descriptors from RGB-D data
CN107346538A (en) Method for tracing object and equipment
Riffo et al. Threat objects detection in x-ray images using an active vision approach
Liu et al. Hand Gesture Recognition Based on Single‐Shot Multibox Detector Deep Learning
Acharya et al. Ai-enabled droplet detection and tracking for agricultural spraying systems
CN106056627B (en) A kind of robust method for tracking target based on local distinctive rarefaction representation
Muthu et al. Motion segmentation of rgb-d sequences: Combining semantic and motion information using statistical inference
Mazzetto et al. Deep learning models for visual inspection on automotive assembling line
CN110427912A (en) A kind of method for detecting human face and its relevant apparatus based on deep learning
Weng et al. Whose track is it anyway? improving robustness to tracking errors with affinity-based trajectory prediction
Li et al. Visual slam in dynamic scenes based on object tracking and static points detection
CN114627339B (en) Intelligent recognition tracking method and storage medium for cross border personnel in dense jungle area
Kim et al. Online multiple object tracking based on open-set few-shot learning
Zhao et al. Dpit: Dual-pipeline integrated transformer for human pose estimation
Dhore et al. Human Pose Estimation And Classification: A Review
Yang et al. Probabilistic projective association and semantic guided relocalization for dense reconstruction
Dhassi et al. Visual tracking based on adaptive interacting multiple model particle filter by fusing multiples cues
CN109615641A (en) Multiple target pedestrian tracking system and tracking based on KCF algorithm
Rasol et al. N-fold Bernoulli probability based adaptive fast-tracking algorithm and its application to autonomous aerial refuelling
Li et al. Group-skeleton-based human action recognition in complex events
Dong et al. ESA-Net: An efficient scale-aware network for small crop pest detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant