CN104298955A - Human head detection method and device - Google Patents

Human head detection method and device

Info

Publication number
CN104298955A
CN104298955A (application CN201310302623.XA)
Authority
CN
China
Prior art keywords
people
similarity
color
histogram
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201310302623.XA
Other languages
Chinese (zh)
Inventor
赵勇 (Zhao Yong)
李晶晶 (Li Jingjing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN ZHENBANG INDUSTRY Co Ltd
Original Assignee
SHENZHEN ZHENBANG INDUSTRY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN ZHENBANG INDUSTRY Co Ltd filed Critical SHENZHEN ZHENBANG INDUSTRY Co Ltd
Priority to CN201310302623.XA priority Critical patent/CN104298955A/en
Publication of CN104298955A publication Critical patent/CN104298955A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a human head detection method and device. The method comprises the following steps: scanning an image to be detected with windows of a preset size; obtaining the color self-similarity feature of each scanning window; and using a classifier to perform head/non-head judgment on each scanning window. The benefit of the method is that CSS is chosen as the feature representation of the human head, and experiments show that the CSS feature is robust and improves detection performance. Combined with SVM classifier training, human heads in video images can be detected, so the method can be applied to human body tracking, event detection and people-flow statistics, and is therefore significant for realizing automated monitoring.

Description

Human head detection method and device
Technical field
The application relates to the technical fields of video surveillance and image processing, and in particular to a human head detection method and a human head detection device.
Background technology
In the field of intelligent video surveillance, intelligent detection and counting of human heads is an important research topic. However, the grayscale appearance of a human head varies greatly with pose: images taken from the front, side, back and top of a head can differ substantially, which makes head detection difficult. How to extract a robust feature is therefore a key factor affecting the accuracy and real-time performance of head detection and counting.
Summary of the invention
The application provides a human head detection method and a corresponding device.
According to a first aspect, the application provides a human head detection method, comprising: scanning an image to be detected with windows of a preset size; obtaining the color self-similarity feature of each scanning window; and using a classifier to perform head/non-head judgment on each scanning window.
Further, obtaining the color self-similarity feature comprises: a quantization step, in which the image to be detected is converted to HSV color space, the H, S and V components are obtained, and the three components are uniformly quantized; a blocking step, in which the image to be detected is evenly divided into blocks of a preset size; a histogram calculation step, in which the color histogram of each block is computed in HSV space; a similarity calculation step, in which the similarity between every pair of blocks is computed, yielding all similarity values; and a normalization step, in which all similarity values are concatenated and the concatenated result is normalized, giving the color self-similarity feature.
Preferably, in the quantization step, the H, S and V components are uniformly quantized into 3*3*3 bins; in the blocking step, the preset block size is 8*8 pixels.
Preferably, in the histogram calculation step, trilinear interpolation is applied to each pixel of each block in HSV space before the color histogram is computed.
Preferably, in the similarity calculation step, the similarity is computed with the histogram intersection method.
Preferably, in the normalization step, the normalization is implemented with the L2 norm.
Further, the classifier is based on a support vector machine and is obtained in advance by extracting color self-similarity features from a sample set and training on them; the sample set comprises positive head samples and negative (non-head) samples.
According to a second aspect, the application provides a human head detection device, comprising: a window scanning module for scanning an image to be detected with windows of a preset size; a feature acquisition module for obtaining the color self-similarity feature of each scanning window; and a classification module for using a classifier to perform head/non-head judgment on each scanning window.
Further, the feature acquisition module comprises: a quantization unit for converting the image to be detected to HSV color space, obtaining the H, S and V components and uniformly quantizing them; a blocking unit for evenly dividing the image to be detected into blocks of a preset size; a histogram calculation unit for computing the color histogram of each block in HSV space; a similarity calculation unit for computing the similarity between every pair of blocks, yielding all similarity values; and a normalization unit for concatenating all similarity values and normalizing the result, giving the color self-similarity feature.
Preferably, in the histogram calculation unit, trilinear interpolation is applied to each pixel of each block in HSV space before the color histogram is computed; in the similarity calculation unit, the similarity is computed with the histogram intersection method; and in the normalization unit, the normalization is implemented with the L2 norm.
The beneficial effects of the application are: CSS is chosen as the feature representation of the human head, and experiments show that the CSS feature is robust and improves detection performance; combined with SVM classifier training, human heads in video images can be detected, which is applicable to human body tracking, event detection and people-flow statistics, and is therefore significant for realizing automated monitoring.
Brief description of the drawings
Fig. 1 is a flow diagram of the human head detection method of an embodiment of the application;
Fig. 2 is a flow diagram of CSS feature extraction in an embodiment of the application;
Fig. 3 is a diagram of the mapping between feature values and the HSV components in an embodiment of the application;
Fig. 4 is a structural diagram of the human head detection device of an embodiment of the application.
Detailed description of the embodiments
The present invention is described in further detail below through embodiments with reference to the accompanying drawings.
Because the face, shoulders and other parts of a human head are roughly symmetric, the color histograms of different image blocks of a head remain highly similar even for a head pitched by some angle or seen from the side. The application therefore exploits this color-space similarity and proposes a statistical learning method based on the color self-similarity (CSS, Color Self-Similarity) feature to detect human heads. In addition, this feature is scale-invariant, which facilitates fast multiscale detection.
Embodiment 1:
As shown in Fig. 1, the head detection method of this embodiment comprises a training process and a detection process. The training process comprises the steps: obtain a training sample set (S11), extract CSS features (S13) and train the classifier (S15). The detection process comprises the steps: scan the image to be detected (S21), obtain the CSS feature of each scanning window (S23) and perform head/non-head judgment on each scanning window (S25).
In the step of obtaining the training sample set (S11), samples can be obtained by capturing video images. The training sample set comprises positive head samples and negative samples. The positive samples can include head images taken from the front, side, back and top, i.e. real head images covering different poses, different hair styles and different hats; the negative samples can be arbitrary images that do not contain a head, such as landscapes, animals or text.
In the step of scanning the image to be detected (S21), the input image is scanned with windows of a certain size. The image to be detected is first scaled into a pyramid, i.e. a multiscale sequence from coarse to fine resolution is built; then, on the image at each scale, candidate head windows of the same size as the training samples are scanned densely. Each candidate head window is fed into the classifier, and the candidate windows recognized as heads are kept.
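As an illustration, the pyramid-and-dense-scan procedure above can be sketched in Python. The scale factor 1.25, the window stride of 8 pixels and the nearest-neighbour downscaling are assumptions made for the sketch; the patent does not specify them.

```python
import numpy as np

def pyramid(image, scale=1.25, min_size=40):
    """Yield progressively downscaled copies of the image until the
    smallest side drops below the training-window size (nearest-neighbour
    resampling; an illustrative stand-in for a proper image resize)."""
    img = image
    while min(img.shape[:2]) >= min_size:
        yield img
        nh, nw = int(img.shape[0] / scale), int(img.shape[1] / scale)
        if min(nh, nw) < 1:
            break
        ys = (np.arange(nh) * scale).astype(int)
        xs = (np.arange(nw) * scale).astype(int)
        img = img[ys][:, xs]

def sliding_windows(image, win=40, step=8):
    """Yield (x, y, patch) for every win x win candidate window."""
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            yield x, y, image[y:y + win, x:x + win]
```

Each yielded patch would then be turned into a CSS feature vector and passed to the classifier.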
Carry out the number of people/non-number of people judgement S25 for sorter training S15 with to scanning window, the correlation technique of pattern-recognition can be adopted to realize.The present embodiment adopts support vector machine (SVM, Support Vector Machine) to realize the training study of sorter, and trains the categorised decision obtained to adjudicate according to SVM in testing process.SVM is a kind of conventional machine learning method, and its core concept is, at the inseparable data set of lower dimension Linear, a kind of nonlinear mapping algorithm can be used to be translated into more high-dimensional space, data set can be divided.It needs to seek optimum classifying face, and not only requirement can separate faultless for two class samples, also requires that the class interval of two class samples is maximum.Concrete training method can realize based on existing SVM theory.During specific implementation, can realize training the SVM of CSS feature according to existing SVM knowwhy coding, also directly can call existing SVM storehouse and carry out SVM training, such as use the SVM tool box of Taiwan woods intelligence benevolence (Chih-Jen Lin) professor's exploitation, thus the support vector machine, threshold value etc. 
that can obtain required for scanning window judgement, namely optimum classifying face, then in testing process, carries out the judgement of the number of people/non-number of people to scanning window.If court verdict is just, then this window comprises the number of people, otherwise is inhuman head region.Treat detected image due to whole testing process and carry out pyramid convergent-divergent, the detection window that a lot of are classified as the number of people can be obtained, mean-shift algorithm is used to merge these windows, by multiple in position with size on can be considered to same destination object window merge become a window, finally testing result is shown.Research shows, the feature of robust can improve the performance of detection.Based on this, the present embodiment propose extraction CSS feature flow process as shown in Figure 2, correspondingly, the CSS characteristic S23 of each scanning window of acquisition that can realize in testing process shown in Fig. 1 according to this flow process.
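The patent trains with an SVM library such as LIBSVM. As a self-contained stand-in, a linear SVM on CSS feature vectors can be sketched with a Pegasos-style sub-gradient method; the hyper-parameters, the training scheme and the label convention (+1 head, -1 non-head) below are illustrative assumptions, not the patent's configuration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style hinge-loss training of a linear SVM.
    X: (n, d) feature matrix (e.g. L2-normalized 300-D CSS vectors);
    y: labels in {+1, -1}. Returns the weight vector w and bias b."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:      # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                              # only regularize
                w = (1 - eta * lam) * w
    return w, b

def predict(w, b, X):
    """Sign of the decision function: +1 (head) or -1 (non-head)."""
    return np.where(X @ w + b >= 0, 1, -1)
```

On linearly separable toy data the learned hyperplane separates the two classes; in practice one would feed it the extracted CSS features of the positive and negative head samples.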
In classification problems, the training sample size is usually chosen according to the class: for example, common face detection databases use training images of 24*24 pixels, and the INRIA pedestrian database uses training images of 64*128 pixels. For head detection there is no general database, so a head database captured by camera is used for training and detection. Here a training sample size of 40*40 pixels is used for illustration; other pixel sizes can of course be used in other embodiments, without restriction. CSS feature extraction comprises steps S131 to S139:
Step S131: uniformly quantize the H, S and V components.
The input image is first converted from RGB (Red, Green, Blue) color space to HSV (Hue, Saturation, Value) color space, giving the H, S and V channels. In HSV space the three components are uniformly quantized, i.e. each channel's range is divided into equal intervals. Since the minimum and maximum of each channel are fixed, the quantized values are determined once the uniform quantization scheme is chosen. In this embodiment the H, S and V components are quantized into 3*3*3 bins, i.e. each channel is divided into 3 intervals. The conversion can be implemented with the OpenCV library (Open Source Computer Vision Library), and the quantized values are:
H = 0 for h ∈ [0, 60); 1 for h ∈ [60, 120); 2 for h ∈ [120, 180]
S = 0 for s ∈ [0, 85); 1 for s ∈ [85, 170); 2 for s ∈ [170, 255]
V = 0 for v ∈ [0, 85); 1 for v ∈ [85, 170); 2 for v ∈ [170, 255]
Each of the H, S and V channels of a pixel thus falls into interval 0, 1 or 2, so the pixel value can fall into 3*3*3 = 27 possible interval combinations, represented by the feature value k = 0, 1, 2, ..., 26. Defining the mapping k = f(H, S, V) gives the mapping table shown in Fig. 3. Under this quantization, the color histogram in HSV space therefore has 27 dimensions, denoted H(k).
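A minimal sketch of this quantization and the k = f(H, S, V) mapping follows. Since the Fig. 3 mapping table is not reproduced here, the bin ordering k = 9*H + 3*S + V is an assumed layout; the value ranges follow the 8-bit OpenCV HSV convention implied by the intervals above (h in [0, 180], s and v in [0, 255]).

```python
def quantize_hsv(h, s, v):
    """Quantize one HSV pixel into 3*3*3 bins and return the feature value k.
    The ordering k = 9*H + 3*S + V is an assumption (Fig. 3 not reproduced)."""
    H = min(int(h) // 60, 2)   # [0,60) -> 0, [60,120) -> 1, [120,180] -> 2
    S = min(int(s) // 85, 2)   # [0,85) -> 0, [85,170) -> 1, [170,255] -> 2
    V = min(int(v) // 85, 2)
    return 9 * H + 3 * S + V
```

For example, a pixel with (h, s, v) = (61, 90, 10) falls into intervals (1, 1, 0) and maps to k = 12.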
Step S133: block the image.
Since the color self-similarity feature measures the similarity of color distributions between different blocks (for example, the two eye regions, the hair regions and the shoulders have strong color self-similarity), a very large block such as 20*20 is too coarse and hinders extraction of robust discriminative features, while a very small block such as 2*2 cannot characterize a representative part of the head and does not separate it from other classes. Blocks of about 8*8 or 10*10 pixels are therefore generally chosen. This embodiment uses 8*8: the image is evenly divided into blocks of 8*8 pixels, 25 blocks in total.
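The blocking of a 40*40 window into 25 blocks of 8*8 pixels can be written as:

```python
import numpy as np

def split_blocks(window, bs=8):
    """Evenly divide a window into bs*bs pixel blocks, in row-major order."""
    h, w = window.shape[:2]
    return [window[y:y + bs, x:x + bs]
            for y in range(0, h, bs)
            for x in range(0, w, bs)]
```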
Step S135: compute the color histogram H_p(k) of each block in HSV space, where p is the block index, p = 1, 2, ..., 25.
In one embodiment, this step proceeds as follows: trilinear interpolation is applied to each pixel of each block, which is equivalent to linearly interpolating the H, S and V values on their three channels to obtain each pixel's contribution to the color histogram. Let h_0 = 0, h_1 = 60, h_2 = 120, h_3 = 180 denote the boundaries of the H-channel quantization intervals; similarly, s_0, s_1, s_2, s_3 and v_0, v_1, v_2, v_3 denote the boundaries of the S- and V-channel intervals. Let h_m0 = 30, h_m1 = 90, h_m2 = 150 denote the midpoints of the H-channel intervals; similarly, s_m0, s_m1, s_m2 and v_m0, v_m1, v_m2 denote the midpoints of the S- and V-channel intervals. Let h_len = 60 denote the width of the H-channel intervals; likewise the widths of the S- and V-channel intervals are s_len = v_len = 85. Let h_w[3] denote the weights contributed to the three quantization intervals in the linear interpolation of the H channel; similarly, s_w[3] and v_w[3] denote the weights contributed by the S and V channels to their quantization intervals. Then, for each pixel with values h, s, v, the interpolation on the H channel can be expressed by the following pseudo-code:
if (h >= h_0 && h < h_m0)
    h_w[0] = 1.0;
else if (h >= h_m0 && h < h_m1) {
    h_w[0] = (h_m1 - h) / h_len;
    h_w[1] = 1 - h_w[0];
} else if (h >= h_m1 && h < h_m2) {
    h_w[1] = (h_m2 - h) / h_len;
    h_w[2] = 1 - h_w[1];
} else if (h >= h_m2 && h <= h_3) {
    h_w[2] = 1.0;
}
The values of s_w[3] and v_w[3] are obtained in the same way.
The pixel's contribution to the color histogram is then:
H[k] = H[f(i, j, m)] = H[k] + h_w[i] × s_w[j] × v_w[m]
where i, j and m each range over {0, 1, 2}.
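The interpolation above can be sketched in Python. The S- and V-channel midpoints 42.5, 127.5 and 212.5 follow from the interval boundaries given earlier, and the bin layout k = 9*i + 3*j + m is an assumption consistent with the earlier quantization sketch (the patent defers the exact layout to Fig. 3).

```python
import numpy as np

def channel_weights(val, mids, width):
    """Soft-assign one channel value to its three quantization bins,
    following the pseudo-code above (linear interpolation between
    interval midpoints); the three weights always sum to 1."""
    w = np.zeros(3)
    m0, m1, m2 = mids
    if val < m0:
        w[0] = 1.0
    elif val < m1:
        w[0] = (m1 - val) / width
        w[1] = 1.0 - w[0]
    elif val < m2:
        w[1] = (m2 - val) / width
        w[2] = 1.0 - w[1]
    else:
        w[2] = 1.0
    return w

def add_pixel(hist, h, s, v):
    """Accumulate one pixel's trilinear contribution into a 27-bin
    histogram; the bin ordering 9*i + 3*j + m is an assumption."""
    hw = channel_weights(h, (30.0, 90.0, 150.0), 60.0)
    sw = channel_weights(s, (42.5, 127.5, 212.5), 85.0)
    vw = channel_weights(v, (42.5, 127.5, 212.5), 85.0)
    for i in range(3):
        for j in range(3):
            for m in range(3):
                hist[9 * i + 3 * j + m] += hw[i] * sw[j] * vw[m]
```

Because each channel's three weights sum to 1, the histogram total equals the number of pixels accumulated, which keeps the block histograms comparable.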
Step S137: compute the similarity between every pair of blocks.
The similarity can be computed with any similarity measure common in image processing, such as the Euclidean distance, the correlation coefficient or histogram intersection. With the Euclidean distance, a smaller distance means the images are more similar; the correlation coefficient generally lies between -1 and 1, where -1 indicates complete negative correlation and 1 indicates identical distributions. This embodiment uses the histogram intersection method to compute the similarity S(x, y) between any two blocks, with the following formula, in which the parameters have the meanings given above.
S(x, y) = Σ_{k=0}^{M-1} min(H_x(k), H_y(k)) / Σ_{k=0}^{M-1} H_x(k)
where x and y denote two blocks, H_x(k) and H_y(k) are their color histograms, and M is the number of bin combinations from the uniform quantization of the H, S and V components. As above, with 3*3*3 uniform quantization M = 27, and the formula becomes:
S(x, y) = Σ_{k=0}^{26} min(H_x(k), H_y(k)) / Σ_{k=0}^{26} H_x(k)
Computing this for every pair among the 25 blocks yields 25 × 24 / 2 = 300 similarity values in total.
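The pairwise computation of step S137 can be sketched as follows; the ordering x < y over unordered pairs and the normalization by H_x follow the formula above.

```python
import numpy as np
from itertools import combinations

def hist_intersection(hx, hy):
    """S(x, y) = sum_k min(Hx(k), Hy(k)) / sum_k Hx(k)."""
    return float(np.minimum(hx, hy).sum() / hx.sum())

def css_similarities(block_hists):
    """Similarity of every unordered pair of block histograms;
    25 blocks give C(25, 2) = 300 values."""
    return [hist_intersection(block_hists[x], block_hists[y])
            for x, y in combinations(range(len(block_hists)), 2)]
```

Note that normalizing by H_x makes the measure asymmetric; taking each unordered pair once in index order is one consistent convention for assembling the 300 values.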
Step S139: concatenate and normalize all similarity values.
In this step, the 300 similarity values obtained in step S137 are concatenated (cascaded), i.e. the 300 one-dimensional values are joined into a 300-dimensional vector. The color self-similarity feature is this vector after normalization. The normalization can be any common scheme such as L1-Norm, L1-Sqrt or L2-Norm; this embodiment uses L2-Norm.
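The concatenation and L2-Norm normalization of step S139 amount to the following; the small epsilon is an added safeguard against a zero vector and is not part of the patent.

```python
import numpy as np

def css_feature(similarities, eps=1e-12):
    """Concatenate the 300 similarity values and L2-normalize the result."""
    vec = np.asarray(similarities, dtype=float)
    return vec / (np.linalg.norm(vec) + eps)
```

The resulting unit-length vector is the CSS feature fed to the SVM classifier.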
The head detection method proposed in this embodiment chooses CSS as the feature representation of the human head; experiments show that the CSS feature is robust and improves detection performance. Combined with SVM classifier training, heads in video images can be detected, which is applicable to human body tracking and event detection, and is significant for realizing automated monitoring.
Embodiment 2:
As shown in Fig. 4, this embodiment provides a human head detection device comprising: a window scanning module for scanning an image to be detected with windows of a preset size; a feature acquisition module for obtaining the color self-similarity feature of each scanning window; and a classification module for using a classifier to perform head/non-head judgment on each scanning window.
The feature acquisition module comprises a quantization unit, a blocking unit, a histogram calculation unit, a similarity calculation unit and a normalization unit. The quantization unit converts the image to be detected to HSV color space, obtains the H, S and V components and uniformly quantizes them; the blocking unit evenly divides the image to be detected into blocks of a preset size; the histogram calculation unit computes the color histogram of each block in HSV space; the similarity calculation unit computes the similarity between every pair of blocks, yielding all similarity values; and the normalization unit concatenates all similarity values and normalizes the result, giving the color self-similarity feature. Preferably, in the histogram calculation unit, trilinear interpolation is applied to each pixel of each block in HSV space before the color histogram is computed; in the similarity calculation unit, the similarity is computed with the histogram intersection method; and in the normalization unit, the normalization is implemented with the L2 norm.
The implementation of each module and unit can refer to the corresponding description in Embodiment 1 above and is not repeated here.
Those skilled in the art will appreciate that all or part of the steps of the above methods can be completed by a program instructing related hardware; the program can be stored in a computer-readable storage medium, which can include read-only memory, random access memory, magnetic disk, optical disc, etc.
The above further describes the present invention with reference to specific embodiments, but the specific implementation of the invention is not limited to these descriptions. A person of ordinary skill in the art to which the invention pertains can make simple deductions or substitutions without departing from the concept of the invention.

Claims (10)

1. A human head detection method, characterized in that it comprises:
scanning an image to be detected with windows of a preset size;
obtaining the color self-similarity feature of each scanning window;
using a classifier to perform head/non-head judgment on each scanning window.
2. The human head detection method of claim 1, characterized in that obtaining the color self-similarity feature comprises:
a quantization step, in which the image to be detected is converted to HSV color space, the H, S and V components are obtained, and the three components are uniformly quantized;
a blocking step, in which the image to be detected is evenly divided into blocks of a preset size;
a histogram calculation step, in which the color histogram of each block is computed in HSV space;
a similarity calculation step, in which the similarity between every pair of blocks is computed, yielding all similarity values;
a normalization step, in which all similarity values are concatenated and the concatenated result is normalized, giving the color self-similarity feature.
3. The human head detection method of claim 2, characterized in that:
in the quantization step, the H, S and V components are uniformly quantized into 3*3*3 bins;
in the blocking step, the preset block size is 8*8 pixels.
4. The human head detection method of claim 2, characterized in that, in the histogram calculation step, trilinear interpolation is applied to each pixel of each block in HSV space before the color histogram is computed.
5. The human head detection method of claim 2, characterized in that, in the similarity calculation step, the similarity is computed with the histogram intersection method, with the formula:
S(x, y) = Σ_{k=0}^{M-1} min(H_x(k), H_y(k)) / Σ_{k=0}^{M-1} H_x(k)
where S(x, y) is the similarity between two blocks x and y, H_x(k) and H_y(k) are the color histograms of the two blocks, M is the number of bin combinations from the uniform quantization of the H, S and V components, and k is an integer.
6. The human head detection method of claim 2, characterized in that, in the normalization step, the normalization is implemented with the L2 norm.
7. The human head detection method of claim 1 or 2, characterized in that the classifier is based on a support vector machine and is obtained in advance by extracting color self-similarity features from a sample set and training on them, the sample set comprising positive head samples and negative (non-head) samples.
8. A human head detection device, characterized in that it comprises:
a window scanning module for scanning an image to be detected with windows of a preset size;
a feature acquisition module for obtaining the color self-similarity feature of each scanning window;
a classification module for using a classifier to perform head/non-head judgment on each scanning window.
9. The human head detection device of claim 8, characterized in that the feature acquisition module comprises:
a quantization unit for converting the image to be detected to HSV color space, obtaining the H, S and V components and uniformly quantizing them;
a blocking unit for evenly dividing the image to be detected into blocks of a preset size;
a histogram calculation unit for computing the color histogram of each block in HSV space;
a similarity calculation unit for computing the similarity between every pair of blocks, yielding all similarity values;
a normalization unit for concatenating all similarity values and normalizing the result, giving the color self-similarity feature.
10. The human head detection device of claim 9, characterized in that:
in the histogram calculation unit, trilinear interpolation is applied to each pixel of each block in HSV space before the color histogram is computed;
in the similarity calculation unit, the similarity is computed with the histogram intersection method;
in the normalization unit, the normalization is implemented with the L2 norm.
CN201310302623.XA 2013-07-15 2013-07-15 Human head detection method and device Withdrawn CN104298955A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310302623.XA CN104298955A (en) 2013-07-15 2013-07-15 Human head detection method and device


Publications (1)

Publication Number Publication Date
CN104298955A true CN104298955A (en) 2015-01-21

Family

ID=52318676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310302623.XA Withdrawn CN104298955A (en) 2013-07-15 2013-07-15 Human head detection method and device

Country Status (1)

Country Link
CN (1) CN104298955A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470802A (en) * 2007-12-28 2009-07-01 清华大学 Object detection apparatus and method thereof
CN101620673A (en) * 2009-06-18 2010-01-06 北京航空航天大学 Robust face detecting and tracking method
CN101877007A (en) * 2010-05-18 2010-11-03 南京师范大学 Remote sensing image retrieval method with integration of spatial direction relation semanteme
CN102136147A (en) * 2011-03-22 2011-07-27 深圳英飞拓科技股份有限公司 Target detecting and tracking method, system and video monitoring device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
S. Walk et al.: "New Features and Insights for Pedestrian Detection", Proceedings of the IEEE Conference on Computer *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096553A (en) * 2016-06-06 2016-11-09 合肥工业大学 A kind of pedestrian traffic statistical method based on multiple features
CN111444850A (en) * 2020-03-27 2020-07-24 北京爱笔科技有限公司 Picture detection method and related device
CN111444850B (en) * 2020-03-27 2023-11-14 北京爱笔科技有限公司 Picture detection method and related device

Similar Documents

Publication Publication Date Title
CN102831447B (en) Method for identifying multi-class facial expressions at high precision
CN104866616B (en) Monitor video Target Searching Method
CN102902959B (en) Face recognition method and system for storing identification photo based on second-generation identity card
CN104732601B (en) Automatic high-recognition-rate attendance checking device and method based on face recognition technology
CN103034838B (en) A kind of special vehicle instrument type identification based on characteristics of image and scaling method
CN101236608B (en) Human face detection method based on picture geometry
CN103617414B (en) The fire disaster flame of a kind of fire color model based on maximum margin criterion and smog recognition methods
CN107622229A (en) A kind of video frequency vehicle based on fusion feature recognition methods and system again
CN104268590B (en) The blind image quality evaluating method returned based on complementary combination feature and multiphase
CN104063722A (en) Safety helmet identification method integrating HOG human body target detection and SVM classifier
WO2022178978A1 (en) Data dimensionality reduction method based on maximum ratio and linear discriminant analysis
CN105005786A (en) Texture image classification method based on BoF and multi-feature fusion
CN106960176B (en) Pedestrian gender identification method based on transfinite learning machine and color feature fusion
CN108829711B (en) Image retrieval method based on multi-feature fusion
An et al. CBIR based on adaptive segmentation of HSV color space
CN102176208A (en) Robust video fingerprint method based on three-dimensional space-time characteristics
CN102695056A (en) Method for extracting compressed video key frames
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN103745197B (en) A kind of detection method of license plate and device
CN109829924A (en) A kind of image quality evaluating method based on body feature analysis
CN109829905A (en) It is a kind of face beautification perceived quality without reference evaluation method
CN103473545A (en) Text-image similarity-degree measurement method based on multiple features
CN102034107A (en) Unhealthy image differentiating method based on robust visual attention feature and sparse representation
CN108073940A (en) A kind of method of 3D object instance object detections in unstructured moving grids
Li et al. Codemaps-segment, classify and search objects locally

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518055 Baoan, Guangdong, Shiyan with Beverly science and Technology Innovation Park 8-6

Applicant after: SHENZHEN ZHENBANG INTELLIGENT TECHNOLOGY CO., LTD.

Address before: 518000 Guangdong, Nanshan District Province, Dragon Ball Road, Longjing, the second industrial zone A building, building B, building 3, floor, building C, building 3, building five, building 3, building

Applicant before: Shenzhen Zhenbang Industry Co., Ltd.

COR Change of bibliographic data
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20150121