CN106327502A - Multi-scene multi-target recognition and tracking method in security video - Google Patents

Multi-scene multi-target recognition and tracking method in security video Download PDF

Info

Publication number
CN106327502A
CN106327502A
Authority
CN
China
Prior art keywords
target object
region
tracking
color histogram
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610805509.2A
Other languages
Chinese (zh)
Inventor
张海霞
金蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201610805509.2A priority Critical patent/CN106327502A/en
Publication of CN106327502A publication Critical patent/CN106327502A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/513 Sparse representations

Abstract

The invention relates to a multi-scene multi-target recognition and tracking method for security video. The method comprises the following steps: (1) an image frame containing a target object is acquired from the video; (2) features of the target object are extracted, comprising a color histogram and a hash code of the target object; (3) for subsequent video, the target object is detected by matching against its features; if the target object is detected, step (4) is carried out, otherwise step (3) is repeated; and (4) the target object is tracked. The color histogram and the hash code serve as target features with different weights and are fused for detection and recognition, avoiding the errors caused by single-feature detection, while a dynamically updated discriminative sparse similarity map method ensures the accuracy and interference resistance of the tracking result. Multi-scene multi-target detection, recognition and tracking are thus realized, with clearly improved accuracy and robustness and good performance in complex background environments.

Description

Multi-scene multi-target recognition and tracking method in security video
Technical field
The present invention relates to a multi-scene multi-target recognition and tracking method in security video, and belongs to the technical fields of digital image processing and computer vision.
Background technology
Large-scale target tracking in video surveillance networks has become a research hotspot in computer vision and is widely applied to traffic management, digital surveillance, smart cities, and so on. However, factors such as illumination changes and complex backgrounds degrade target tracking performance.
To achieve high tracking accuracy, researchers have recently proposed a large number of algorithms. Traditional methods based on target-pattern matching treat target tracking as a local pattern-matching optimization problem. As a classical target-pattern matching algorithm, the mean-shift algorithm converges quickly and is suitable for real-time tracking; however, its weak response to changes in target size limits the method. Another class of traditional methods, based on filtering theory, converts target tracking into a Bayesian framework that uses the prior probability to predict the maximum a posteriori probability of the target object. The Kalman filter performs well on linear, Gaussian, single-model tracking tasks, while the particle filter suits nonlinear, non-Gaussian tracking. These algorithms are beneficial to a degree but have their own defects, and must be configured flexibly according to the application. It is well known that tracking accuracy depends largely on the features extracted from the target object, and an effective feature description is the key to target tracking.
Security video surveillance networks are now ubiquitous, and massive amounts of security video data are stored every day, yet few people analyze and process these data. In general, the targets of interest in a surveillance video are one or more pedestrians, so recognizing and tracking one or more pedestrian targets in video has great practical value, and recognizing and tracking large numbers of targets is a main research direction. At present, security monitoring by human attendants not only wastes manpower but also often misses targets. Computer-aided image processing for multi-target recognition and tracking can make up for the defects of manual observation and realize real-time prompting, early warning and information upload; it not only saves resources but is also accurate, convenient and fast.
Summary of the invention
In view of the deficiencies of the prior art, the invention provides a multi-scene multi-target recognition and tracking method in security video.
Term is explained
Discriminative reverse sparse representation, i.e. the English term "Discriminative Reverse Sparse Representation".
The technical scheme of the invention is as follows:
A multi-scene multi-target recognition and tracking method in security video, the specific steps comprising:
(1) acquiring, from the video, image frames containing the target object;
(2) extracting the features of the target object, comprising the color histogram and the hash code of the target object; if there is one target object, directly extracting its color histogram and hash code; if there are two or more target objects, numbering the target objects and extracting the color histogram and hash code of each numbered target object;
(3) for subsequent video, detecting the target object by matching against its features; if the target object is detected, proceeding to step (4); otherwise, repeating step (3);
(4) tracking the target object.
Preferably according to the invention, in step (3) the target object is detected by the following specific steps:
a. Moving-region detection: detect moving regions in the video with the frame-difference method; if a moving region containing a crowd is detected, go to step b; if a region containing only one person is detected, go to step c. Because the image background is complex, a detected moving region may contain a crowd.
b. Use a crowd-segmentation algorithm to divide a moving region containing a crowd into regions each containing only one person, i.e. several single-person regions.
c. Extract the color histogram and hash code of each single-person region and match them against the color histogram and hash code of the target object, obtaining the Bhattacharyya distance and the Hamming distance respectively. The Bhattacharyya distance has weight a and the Hamming distance has weight b; together they express the similarity between the single-person region and the target object. The Bhattacharyya distance expresses the similarity between the color histogram of the single-person region and that of the target object, and the Hamming distance expresses the similarity between the hash code of the single-person region and that of the target object. The value of a ranges over 60-80%, the value of b ranges over 20-40%, and a + b = 100%.
d. When the similarity between a single-person region and the target object is greater than or equal to a threshold c, it is determined that the single-person region contains the target object, and the target object region is outlined with a rectangular frame; otherwise it is determined that the single-person region does not contain the target object. The threshold c ranges over 70%-90%.
Preferably according to the invention, a = 70%, b = 30%, c = 80%.
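Steps c and d above can be sketched as follows. This is an illustrative sketch only: the function names and normalisation choices are assumptions, and it works with similarities in [0, 1] (1 = identical) derived from the Bhattacharyya distance (rendered "Pasteur's distance" in the machine translation above) and the Hamming distance, so that the fused score compares directly against the threshold c.

```python
import numpy as np

def bhattacharyya_similarity(h1, h2):
    """Bhattacharyya coefficient of two histograms, after normalisation (1 = identical)."""
    h1 = np.asarray(h1, dtype=float); h1 = h1 / h1.sum()
    h2 = np.asarray(h2, dtype=float); h2 = h2 / h2.sum()
    return float(np.sum(np.sqrt(h1 * h2)))

def hamming_similarity(code1, code2):
    """Fraction of matching bits in two equal-length 0/1 hash codes."""
    c1, c2 = np.asarray(code1), np.asarray(code2)
    return float(np.mean(c1 == c2))

def fused_similarity(h1, h2, code1, code2, a=0.7, b=0.3):
    """Step c: weighted fusion of the two cues (a + b must equal 1)."""
    return a * bhattacharyya_similarity(h1, h2) + b * hamming_similarity(code1, code2)

def is_target(score, c=0.8):
    """Step d: declare a detection when the fused score reaches threshold c."""
    return score >= c
```

With the preferred values a = 0.7, b = 0.3, c = 0.8, a region whose histogram and hash both match the model well passes the threshold, while a single matching cue alone generally does not, which is the stated point of the fusion.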
Preferably according to the invention, the color histogram refers to the proportion of each color in the entire image; that is, the color gamut of the color image is divided into intervals, and the number of pixels falling in each interval is counted.
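The histogram definition above can be sketched as below. The choice of 8 intervals per channel is an assumption for illustration; the patent does not fix the number of intervals.

```python
import numpy as np

def color_histogram(image, bins_per_channel=8):
    """Normalised joint RGB histogram (proportions per interval) of an H x W x 3 uint8 image."""
    img = np.asarray(image)
    # Map each 0-255 channel value to an interval index 0..bins_per_channel-1.
    idx = (img.astype(np.uint16) * bins_per_channel) // 256
    # Combine the three per-channel indices into one joint-bin index.
    joint = (idx[..., 0] * bins_per_channel + idx[..., 1]) * bins_per_channel + idx[..., 2]
    hist = np.bincount(joint.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()  # proportions of pixels per interval, as the text describes
```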
Preferably according to the invention, the hash code is obtained through the following steps:
e. Remove the high-frequency components of the image.
f. Reduce the image size to 8 × 8, containing 64 pixels.
g. Convert the 8 × 8 image into a grayscale image and compute the average gray value of the 64 pixels.
h. Compare the gray value of each pixel in the 8 × 8 image with the average gray value; if a pixel's gray value is less than the average, the code value of that pixel is 0, otherwise the code value is 1.
i. After all pixel gray values have been compared with the average gray value, the resulting 64 bits form the hash code, which can accurately describe the features of the image.
An image can be regarded as a two-dimensional signal comprising components of different frequencies. The high-frequency components of this signal represent regions of sharp change, such as object edges, and describe the details of the image well; the low-frequency components describe the structure of the image. The hash code mainly exploits the low-frequency information of the image.
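Steps e-i describe an average-hash ("aHash") style code. A minimal sketch follows, assuming a grayscale input whose sides divide by 8; shrinking to 8 × 8 by block averaging performs steps e and f together (a real implementation would use a proper resampler).

```python
import numpy as np

def average_hash(gray_image):
    """64-bit hash of a 2-D grayscale array whose sides are divisible by 8."""
    img = np.asarray(gray_image, dtype=float)
    h, w = img.shape
    # Steps e + f: block averaging keeps low frequencies and yields an 8 x 8 image.
    small = img.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    # Steps g-i: 1 where a pixel's gray value reaches the mean, else 0.
    mean = small.mean()
    return (small >= mean).astype(np.uint8).ravel()  # 64 bits
```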
Preferably according to the invention, step (2) specifically means: the target object region in the acquired image is determined manually according to coordinate points, and the features of the target object, comprising its color histogram and hash code, are extracted, thereby obtaining the feature model of the target object.
Preferably according to the invention, in step (4) the target object is tracked with the discriminative sparse similarity map (DSS map) method, specifically as follows:
j. Initial discriminative template set: let Q(h, v) be the center point of the minimum rectangular region of the target object, the minimum rectangular region being the smallest rectangular image region containing the target object, and (h, v) the coordinates of Q. Within a circular region centered at Q(h, v), whose radius is a positive number not greater than one half of the shorter side of the minimum rectangular region, p sample image blocks are taken as the initial positive template set, Q_i being the center point of the i-th sample image block, 1 ≤ i ≤ p. From an annular region around Q(h, v) with outer radius ω, n image blocks are sampled to obtain the initial negative template set, Q_j being the center point of the j-th image block; ω is not greater than one half of the shorter side of the minimum rectangular region.
k. Discriminative reverse sparse representation: the discriminative sparse similarity map matrix expresses the relation between all candidate target objects and the template set, as shown in formula (I):
\[ \arg\min_{C}\ \|T - YC\|_2^2 + \lambda \sum_i \|c_i\|_1 \quad \text{s.t.}\ c_i \ge 0,\ i = 1, 2, 3, \ldots, (p+n) \tag{I} \]
In formula (I), C is the discriminative sparse similarity map matrix; T is the template set, comprising the initial positive template set and the initial negative template set; and Y is the set of candidate target objects.
In the discriminative sparse similarity map algorithm, the tracking problem is viewed as finding, among the candidate regions, the one with the highest similarity to the target region as the region to be tracked. The similarity is computed with the discriminative reverse sparse representation method, which explicitly describes the relation between the candidate regions and the target region and is built on the optimal solution of a multi-task reverse sparse formulation. This formulation searches multiple subsets of the whole candidate region and provides, in the reconstruction, the samples with the smallest error.
The APG (accelerated proximal gradient) algorithm is used to obtain the optimal solution through successive iterations. In this process, the similarities of multiple candidate regions can be computed simultaneously rather than one by one in a single thread, which significantly improves tracking efficiency. Finally, discriminative information is extracted from the discriminative sparse similarity map and used to find the target region to be tracked among the candidate regions. During continuous tracking and evaluation, the candidate most similar to the target object is added to the positive template set, and candidates that differ too much from it are added to the negative template set. This real-time dynamic updating of the positive and negative template sets retains enough discriminative information for tracking, stored in the new discriminative sparse similarity map, which greatly improves tracking accuracy and achieves good results.
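Formula (I) is a non-negative l1-regularised least-squares problem. The patent solves it with APG; the sketch below instead uses plain projected ISTA (the same proximal step, without acceleration), purely to make the objective concrete. The function name, step-size rule, and toy dimensions are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def reverse_sparse_code(T, Y, lam=1e-3, n_iter=200):
    """Approximately minimise ||T - Y C||_F^2 + lam * sum_i ||c_i||_1 with C >= 0."""
    T = np.asarray(T, dtype=float)   # d x (p + n): template set (columns = templates)
    Y = np.asarray(Y, dtype=float)   # d x m: candidate set (columns = candidates)
    C = np.zeros((Y.shape[1], T.shape[1]))
    # Step size from the Lipschitz constant of the quadratic term's gradient.
    step = 0.5 / np.linalg.norm(Y.T @ Y, 2)
    for _ in range(n_iter):
        grad = 2.0 * (Y.T @ (Y @ C - T))              # gradient of ||T - YC||_F^2
        # Proximal step for the non-negative l1 penalty: shrink, then clip at 0.
        C = np.maximum(C - step * grad - step * lam, 0.0)
    return C
```

Each column c_i of the returned C expresses one template as a sparse non-negative combination of candidates; the candidate that reconstructs the positive templates with the largest coefficients is the natural tracking choice, in line with the description above.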
The invention has the following beneficial effects:
The invention designs a computer-vision tracking algorithm that analyzes security video automatically. In the method, the color histogram and the hash code are mixed with different weights as target features for detection and recognition, avoiding the errors of single-feature detection, while the dynamically updated discriminative sparse similarity map method ensures the accuracy and interference resistance of the tracking result. Multi-scene multi-target detection, recognition and tracking are finally realized, with clearly improved accuracy and robustness and good performance in complex background environments. The method lays a foundation for follow-up research. Computer-aided multi-target recognition and tracking makes up for the defects of manual observation and achieves real-time information prompting in security monitoring; it not only saves resources but is also accurate, convenient and fast.
Accompanying drawing explanation
Fig. 1 is a schematic flow chart of the multi-scene multi-target recognition and tracking method in security video of the present invention.
Detailed description of the invention
The invention is further described below with reference to the accompanying drawing and the embodiment, without being limited thereto.
Embodiment
A multi-scene multi-target recognition and tracking method in security video, as shown in Fig. 1, the specific steps comprising:
(1) Acquire, from the video, image frames containing the target object. In general, the targets of interest in a security video are mostly one or more people, so the embodiment of the invention is described with a person as the target object. After the target object to be tracked is determined, images containing it are acquired, comprising the frame in which the target object first appears completely in the video and the following 49 frames.
(2) Collect the training sample set of the target object and extract its features; the specific steps comprise:
The images obtained in step (1) contain the target object, possibly several target objects. The target object to be recognized and tracked is outlined, i.e. the target object region is determined manually, and the coordinates of the four vertices of the target object region are obtained.
Extract the features of the target object, comprising the two features of color histogram and hash code. The image used for extracting the color histogram and hash code is obtained by cropping the target object region from each of the 50 extracted frames according to the vertex coordinates, unifying the crop sizes, summing the pixel color values at corresponding positions, and averaging. If multiple targets are to be recognized and tracked, the targets are numbered and processed separately, yielding a feature model for each target. The color histogram and hash code obtained for each target serve, respectively, as the color-histogram model and hash-code model of that target object.
The color histogram refers to the proportion of each color in the entire image; that is, the color gamut of the color image is divided into intervals, and the number of pixels falling in each interval is counted.
The hash code is obtained through the following steps:
e. Remove the high-frequency components of the image.
f. Reduce the image size to 8 × 8, containing 64 pixels.
g. Convert the 8 × 8 image into a grayscale image and compute the average gray value of the 64 pixels.
h. Compare the gray value of each pixel in the 8 × 8 image with the average gray value; if a pixel's gray value is less than the average, the code value of that pixel is 0, otherwise the code value is 1.
i. After all pixel gray values have been compared with the average gray value, the resulting 64 bits form the hash code, which can accurately describe the features of the image.
An image can be regarded as a two-dimensional signal comprising components of different frequencies. The high-frequency components of this signal represent regions of sharp change, such as object edges, and describe the details of the image well; the low-frequency components describe the structure of the image. The hash code mainly exploits the low-frequency information of the image.
(3) For subsequent video, detect the target object by matching against its features; if the target object is detected, go to step (4); otherwise, repeat step (3).
In step (3), the target object is detected by the following specific steps:
a. Moving-region detection: detect moving regions in the video with the frame-difference method; if a moving region containing a crowd is detected, go to step b; if a region containing only one person is detected, go to step c. Because the image background is complex, a detected moving region may contain a crowd.
b. Use a crowd-segmentation algorithm to divide a moving region containing a crowd into regions each containing only one person, i.e. several single-person regions.
c. Extract the color histogram and hash code of each single-person region and match them against the color histogram and hash code of the target object, obtaining the Bhattacharyya distance and the Hamming distance respectively. The Bhattacharyya distance has weight a and the Hamming distance has weight b; together they express the similarity between the single-person region and the target object. The Bhattacharyya distance expresses the similarity between the color histogram of the single-person region and that of the target object, and the Hamming distance expresses the similarity between the hash code of the single-person region and that of the target object; a = 70%, b = 30%.
d. When the similarity between a single-person region and the target object is greater than or equal to the threshold c, it is determined that the single-person region contains the target object, and the target object region is outlined with a rectangular frame; otherwise it is determined that the single-person region does not contain the target object; threshold c = 80%.
(4) Track the target object, using the discriminative sparse similarity map (DSS map) method, specifically as follows:
j. Initial discriminative template set: let Q(h, v) be the center point of the minimum rectangular region of the target object, the minimum rectangular region being the smallest rectangular image region containing the target object, and (h, v) the coordinates of Q. Within a circular region centered at Q(h, v), whose radius is a positive number not greater than one half of the shorter side of the minimum rectangular region, p sample image blocks are taken as the initial positive template set, Q_i being the center point of the i-th sample image block, 1 ≤ i ≤ p. From an annular region around Q(h, v) with outer radius ω, n image blocks are sampled to obtain the initial negative template set, Q_j being the center point of the j-th image block; ω is not greater than one half of the shorter side of the minimum rectangular region.
k. Discriminative reverse sparse representation: the discriminative sparse similarity map matrix expresses the relation between all candidate target objects and the template set, as shown in formula (I):
\[ \arg\min_{C}\ \|T - YC\|_2^2 + \lambda \sum_i \|c_i\|_1 \quad \text{s.t.}\ c_i \ge 0,\ i = 1, 2, 3, \ldots, (p+n) \tag{I} \]
In formula (I), C is the discriminative sparse similarity map matrix; T is the template set, comprising the initial positive template set and the initial negative template set; and Y is the set of candidate target objects.
In the discriminative sparse similarity map algorithm, the tracking problem is viewed as finding, among the candidate regions, the one with the highest similarity to the target region as the region to be tracked. The similarity is computed with the discriminative reverse sparse representation method, which explicitly describes the relation between the candidate regions and the target region and is built on the optimal solution of a multi-task reverse sparse formulation. This formulation searches multiple subsets of the whole candidate region and provides, in the reconstruction, the samples with the smallest error.
The APG algorithm is used to obtain the optimal solution through successive iterations. In this process, the similarities of multiple candidate regions can be computed simultaneously rather than one by one in a single thread, which significantly improves tracking efficiency. Finally, discriminative information is extracted from the discriminative sparse similarity map and used to find the target region to be tracked among the candidate regions. During continuous tracking and evaluation, the candidate most similar to the target object is added to the positive template set, and candidates that differ too much from it are added to the negative template set. This real-time dynamic updating of the positive and negative template sets retains enough discriminative information for tracking, stored in the new discriminative sparse similarity map, which greatly improves tracking accuracy and achieves good results.

Claims (7)

1. A multi-scene multi-target recognition and tracking method in security video, characterized in that the specific steps comprise:
(1) acquiring, from the video, image frames containing the target object;
(2) extracting the features of the target object, comprising the color histogram and the hash code of the target object; if there is one target object, directly extracting its color histogram and hash code; if there are two or more target objects, numbering the target objects and extracting the color histogram and hash code of each numbered target object;
(3) for subsequent video, detecting the target object by matching against its features; if the target object is detected, proceeding to step (4); otherwise, repeating step (3);
(4) tracking the target object.
2. The multi-scene multi-target recognition and tracking method in security video according to claim 1, characterized in that in step (3) the target object is detected by the following specific steps:
a. moving-region detection: detecting moving regions in the video with the frame-difference method; if a moving region containing a crowd is detected, going to step b; if a region containing only one person is detected, going to step c;
b. using a crowd-segmentation algorithm to divide a moving region containing a crowd into regions each containing only one person, i.e. several single-person regions;
c. extracting the color histogram and hash code of each single-person region and matching them against the color histogram and hash code of the target object, obtaining the Bhattacharyya distance and the Hamming distance respectively; the Bhattacharyya distance has weight a and the Hamming distance has weight b, together expressing the similarity between the single-person region and the target object; the Bhattacharyya distance expresses the similarity between the color histogram of the single-person region and that of the target object, and the Hamming distance expresses the similarity between the hash code of the single-person region and that of the target object; the value of a ranges over 60-80%, the value of b ranges over 20-40%, and a + b = 100%;
d. when the similarity between a single-person region and the target object is greater than or equal to a threshold c, determining that the single-person region contains the target object and outlining the target object region with a rectangular frame; otherwise determining that the single-person region does not contain the target object; the threshold c ranges over 70%-90%.
3. The multi-scene multi-target recognition and tracking method in security video according to claim 2, characterized in that a = 70%, b = 30%, and c = 80%.
4. The multi-scene multi-target recognition and tracking method in security video according to claim 1, characterized in that the color histogram refers to the proportion of each color in the entire image; that is, the color gamut of the color image is divided into intervals, and the number of pixels falling in each interval is counted.
5. The multi-scene multi-target recognition and tracking method in security video according to claim 1, characterized in that the hash code is obtained through the following steps:
e. removing the high-frequency components of the image;
f. reducing the image size to 8 × 8, containing 64 pixels;
g. converting the 8 × 8 image into a grayscale image and computing the average gray value of the 64 pixels;
h. comparing the gray value of each pixel in the 8 × 8 image with the average gray value; if a pixel's gray value is less than the average, the code value of that pixel is 0, otherwise the code value is 1;
i. after all pixel gray values have been compared with the average gray value, the resulting 64 bits form the hash code.
6. The multi-scene multi-target recognition and tracking method in security video according to claim 1, characterized in that step (2) specifically means: the target object region in the acquired image is determined manually according to coordinate points, and the features of the target object, comprising its color histogram and hash code, are extracted.
7. The multi-scene multi-target recognition and tracking method in security video according to claim 1, characterized in that in step (4) the target object is tracked with the discriminative sparse similarity map method, specifically as follows:
j. Initial discriminative template set: let Q(h, v) be the center point of the minimum rectangular region of the target object, the minimum rectangular region being the smallest rectangular image region containing the target object, and (h, v) the coordinates of Q; within a circular region centered at Q(h, v), whose radius is a positive number not greater than one half of the shorter side of the minimum rectangular region, p sample image blocks are taken as the initial positive template set, Q_i being the center point of the i-th sample image block, 1 ≤ i ≤ p; from an annular region around Q(h, v) with outer radius ω, n image blocks are sampled to obtain the initial negative template set, Q_j being the center point of the j-th image block; ω is not greater than one half of the shorter side of the minimum rectangular region;
k. Discriminative reverse sparse representation: the discriminative sparse similarity map matrix expresses the relation between all candidate target objects and the template set, as shown in formula (I):
\[ \arg\min_{C}\ \|T - YC\|_2^2 + \lambda \sum_i \|c_i\|_1 \quad \text{s.t.}\ c_i \ge 0,\ i = 1, 2, 3, \ldots, (p+n) \tag{I} \]
In formula (I), C is the discriminative sparse similarity map matrix and c_i is its i-th column; T is the template set, comprising the initial positive template library and the initial negative template library; Y is the set of candidate target objects; λ is the weight of the sparsity penalty.
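Formula (I) is a non-negative L1-regularized least-squares problem. A minimal solver sketch in Python using projected ISTA (the choice of optimizer, step size, and iteration count are our assumptions; the patent does not specify how formula (I) is solved):

```python
import numpy as np

def reverse_sparse_code(T, Y, lam=0.01, iters=500):
    """min_C ||T - Y C||_2^2 + lam * sum_i ||c_i||_1  s.t.  C >= 0.
    T: d x (p+n) template set, Y: d x k candidate set;
    returns C: k x (p+n) discriminative sparse similarity map."""
    C = np.zeros((Y.shape[1], T.shape[1]))
    L = 2.0 * np.linalg.norm(Y, 2) ** 2 + 1e-12   # Lipschitz bound on the gradient
    for _ in range(iters):
        grad = 2.0 * Y.T @ (Y @ C - T)            # gradient of the quadratic term
        C = np.maximum(C - (grad + lam) / L, 0.0) # step + shrink + project onto C >= 0
    return C
```

Because C is constrained non-negative, the L1 penalty reduces to the linear term λ·sum(C), so the proximal step is the simple constant shift followed by projection shown above.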
CN201610805509.2A 2016-09-06 2016-09-06 Multi-scene multi-target recognition and tracking method in security video Pending CN106327502A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610805509.2A CN106327502A (en) 2016-09-06 2016-09-06 Multi-scene multi-target recognition and tracking method in security video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610805509.2A CN106327502A (en) 2016-09-06 2016-09-06 Multi-scene multi-target recognition and tracking method in security video

Publications (1)

Publication Number Publication Date
CN106327502A true CN106327502A (en) 2017-01-11

Family

ID=57787500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610805509.2A Pending CN106327502A (en) 2016-09-06 2016-09-06 Multi-scene multi-target recognition and tracking method in security video

Country Status (1)

Country Link
CN (1) CN106327502A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887587A (en) * 2010-07-07 2010-11-17 南京邮电大学 Multi-target track method based on moving target detection in video monitoring
CN103150740A (en) * 2013-03-29 2013-06-12 上海理工大学 Method and system for moving target tracking based on video
CN103489199A (en) * 2012-06-13 2014-01-01 通号通信信息集团有限公司 Video image target tracking processing method and system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BIN SUN ET AL.: "Multiple Objects Tracking and Identification Based on Sparse Representation in Surveillance Video", 《2015 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA》 *
BOHAN ZHUANG ET AL.: "Visual Tracking via Discriminative Sparse Similarity Map", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
LIWEI CHEN ET AL.: "Multi-Object Tracking Based on Multi-Feature Fusion with Adaptive Weights", 《IET INTERNATIONAL COMMUNICATION CONFERENCE ON WIRELESS MOBILE AND COMPUTING》 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664852A (en) * 2017-03-30 2018-10-16 北京君正集成电路股份有限公司 Face detection method and device
CN108664852B (en) * 2017-03-30 2022-06-28 北京君正集成电路股份有限公司 Face detection method and device
CN107194406A (en) * 2017-05-09 2017-09-22 重庆大学 Panoramic machine vision target recognition method based on CS characteristic values
WO2019154029A1 (en) * 2018-02-12 2019-08-15 北京宝沃汽车有限公司 Method for searching for target object, and apparatus and storage medium
CN108665478A (en) * 2018-05-11 2018-10-16 西安天和防务技术股份有限公司 Multi-target data association method in complex scenes
CN108932509A (en) * 2018-08-16 2018-12-04 新智数字科技有限公司 Cross-scene object search method and device based on video tracking
CN109584213A (en) * 2018-11-07 2019-04-05 复旦大学 Multi-target number selection tracking method
CN109584213B (en) * 2018-11-07 2023-05-30 复旦大学 Multi-target number selection tracking method
CN109670532A (en) * 2018-11-23 2019-04-23 腾讯科技(深圳)有限公司 Abnormality recognition method, apparatus and system for biological organ tissue images
CN109670532B (en) * 2018-11-23 2022-12-09 腾讯医疗健康(深圳)有限公司 Method, device and system for identifying abnormality of biological organ tissue image
CN111274435A (en) * 2018-12-04 2020-06-12 北京奇虎科技有限公司 Video backtracking method and device, electronic equipment and readable storage medium
CN111652909A (en) * 2020-04-21 2020-09-11 南京理工大学 Pedestrian multi-target tracking method based on deep hash features
CN111784750A (en) * 2020-06-22 2020-10-16 深圳日海物联技术有限公司 Method, device and equipment for tracking moving object in video image and storage medium
CN112183249A (en) * 2020-09-14 2021-01-05 北京神州泰岳智能数据技术有限公司 Video processing method and device
CN113559506A (en) * 2021-09-24 2021-10-29 深圳易帆互动科技有限公司 Automatic testing method and device for frame synchronization and readable storage medium

Similar Documents

Publication Publication Date Title
CN106327502A (en) Multi-scene multi-target recognition and tracking method in security video
Amit et al. Disaster detection from aerial imagery with convolutional neural network
CN102214291B (en) Method for quickly and accurately detecting and tracking human face based on video sequence
CN104978567B (en) Vehicle checking method based on scene classification
CN101339601B (en) License plate Chinese character recognition method based on SIFT algorithm
CN101814144B (en) Water-free bridge target identification method in remote sensing image
CN105023008A (en) Visual saliency and multiple characteristics-based pedestrian re-recognition method
CN105678231A (en) Pedestrian image detection method based on sparse coding and neural network
CN103839065A (en) Extraction method for dynamic crowd gathering characteristics
CN102184550A (en) Mobile platform ground movement object detection method
CN101464946A (en) Detection method based on head identification and tracking characteristics
Yuan et al. Learning to count buildings in diverse aerial scenes
CN111753682B (en) Hoisting area dynamic monitoring method based on target detection algorithm
CN106373146A (en) Target tracking method based on fuzzy learning
CN111507296A (en) Intelligent illegal building extraction method based on unmanned aerial vehicle remote sensing and deep learning
CN104517095A (en) Head division method based on depth image
CN103886760A (en) Real-time vehicle type detection system based on traffic video
CN110210418A (en) A kind of SAR image Aircraft Targets detection method based on information exchange and transfer learning
CN104599291B (en) Infrared motion target detection method based on structural similarity and significance analysis
Zhigang et al. Vehicle target detection based on R-FCN
CN103456029A (en) Mean Shift tracking method for resisting similar color and illumination variation interference
Xiao et al. 3D urban object change detection from aerial and terrestrial point clouds: A review
Mao et al. A dataset and ensemble model for glass façade segmentation in oblique aerial images
CN105069403B (en) A kind of three-dimensional human ear identification based on block statistics feature and the classification of dictionary learning rarefaction representation
Li et al. Insect detection and counting based on YOLOv3 model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170111