CN106056060A - Method and system for masked veil detection in video image - Google Patents
- Publication number
- CN106056060A CN106056060A CN201610356813.3A CN201610356813A CN106056060A CN 106056060 A CN106056060 A CN 106056060A CN 201610356813 A CN201610356813 A CN 201610356813A CN 106056060 A CN106056060 A CN 106056060A
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- detection
- point
- background
- video image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method and system for detecting masked, veiled persons in a video image. After a video stream is read in, a background model is built using mixture-of-Gaussians background modeling, and the pixels of moving objects are obtained by background subtraction and taken as foreground points. Morphological operations are applied to the foreground points to remove noise and fill holes; connected regions are extracted and labeled, and the contiguously labeled regions are taken as the final foreground region. Features of the moving object are extracted from this final foreground region and classified with a classifier to detect whether a pedestrian is present. When a pedestrian is detected, the pedestrian's head-and-shoulder position is located, and on that basis it is determined whether the pedestrian's face is complete. If the face is incomplete, the pedestrian is marked, the direction of motion is continuously tracked and recorded, and an alarm is raised when the direction of motion is determined to be a predetermined direction. The invention thus provides an effective means for detecting veiled, masked persons in video images.
Description
Technical field
The invention belongs to the technical field of image recognition, and specifically relates to a method and system for detecting face veils in a video image.
Background art
In some high-risk areas, social-safety requirements make it necessary to specially detect and identify persons wearing particular attire, so that preventive measures can be taken in advance. Although images of a monitored region can currently be captured for real-time monitoring, certain transient persons of interest, such as those wearing a face veil, cannot be identified and flagged in a targeted manner ahead of time.
Summary of the invention
The object of the invention is to solve the above technical problem by providing a method and system for face-veil detection in a video image.
To achieve the above object, the present invention adopts the following technical scheme.
A method for face-veil detection in a video image comprises the following steps:
Read in a video stream and build a background model using mixture-of-Gaussians background modeling.
Use background subtraction against the background model to obtain the pixels of moving objects as foreground points.
Apply morphological operations to the foreground points, removing noise and filling holes; extract and label the connected regions, and take the contiguously labeled regions as the final foreground region.
Extract moving-object features from the final foreground region.
Classify the extracted features with an AdaBoost classifier to detect whether a pedestrian is present in the foreground region.
When a pedestrian is detected, locate the pedestrian's head-and-shoulder position using a preset head-and-shoulder model, and on that basis detect whether the pedestrian's face is complete.
If the face is incomplete, mark the pedestrian, track the marked pedestrian continuously, detect the pedestrian's direction of motion, and raise an alarm when the direction of motion is determined to be a predetermined direction.
The features are Haar features, and whether the pedestrian's face is complete is detected with the eigenface method.
The step of using background subtraction and the background model to obtain moving-object pixels as foreground points is: judge whether the current modeled point matches its corresponding background model; if it matches, the point is a background point, otherwise it is a foreground point.
Judging whether the current modeled point matches its corresponding background model uses the following steps: check whether the change relative to the point's corresponding background model lies within a preset matching threshold, and whether the average gradient of the point differs from the gradient in the background model by no more than a predetermined percentage threshold; if either condition fails, the point is considered not to match.
The matching threshold is expressed as:

Threshold_i = (Value_base + Theta) * Sensitivity

where Value_base is the base threshold, Theta is the variance, and Sensitivity is the sensitivity.
It is a further object of the invention to provide a system for face-veil detection in a video image, comprising:
a background-model building module, for reading in a video stream and building a background model using mixture-of-Gaussians background modeling;
a foreground-point acquisition module, for using background subtraction against the background model to obtain moving-object pixels as foreground points;
a foreground-region determination module, for applying morphological operations to the foreground points, removing noise and filling holes, extracting and labeling connected regions, and taking the contiguously labeled regions as the final foreground region;
a feature-extraction module, for extracting moving-object features from the final foreground region;
a pedestrian-detection module, for classifying the extracted features with an AdaBoost classifier and detecting whether a pedestrian is present in the foreground region;
a face-detection module, for locating the pedestrian's head-and-shoulder position with a preset head-and-shoulder model when a pedestrian is detected, and on that basis detecting whether the pedestrian's face is complete;
a motion-direction detection module, for marking the pedestrian when the face is detected to be incomplete, continuously tracking the marked pedestrian, detecting the pedestrian's direction of motion, and raising an alarm when the direction of motion is determined to be a predetermined direction.
Through the above technical scheme, the invention can rapidly detect moving objects in a video image and determine whether they are pedestrians, locate a detected pedestrian's head-and-shoulder position to check whether the face is complete, judge whether the pedestrian wears a face veil, track the pedestrian's direction of motion, and raise an alarm when that direction is a predetermined alarm direction. This provides an effective detection and recognition method for rapidly finding veiled, masked persons in video images.
Brief description of the drawings
Fig. 1 is a flowchart of the method for face-veil detection in a video image provided by an embodiment of the invention;
Fig. 2 is a schematic diagram of the Haar features.
Detailed description of the invention
Below, the substantive features and advantages of the invention are further described with reference to examples, but the invention is not limited to the listed embodiments.
As shown in Fig. 1, a method for face-veil detection in a video image includes:
S101: read in a video stream and build a background model using mixture-of-Gaussians background modeling;
S102: use background subtraction against the background model to obtain moving-object pixels as foreground points;
S103: apply morphological operations to the foreground points, remove noise and fill holes, extract and label connected regions, and take the contiguously labeled regions as the final foreground region;
S104: extract moving-object features from the final foreground region, where the features are Haar features;
S105: classify the extracted features with an AdaBoost classifier and detect whether a pedestrian is present in the foreground region. In the invention the classifier is mainly an AdaBoost cascade of strong classifiers: the extracted features are fed to the trained pedestrian classifier to judge whether a pedestrian exists. If not, return to S101 and continue detection; if so, proceed to the next step;
S106: when a pedestrian is detected, locate the pedestrian's head-and-shoulder position with a preset head-and-shoulder model, and on that basis detect whether the pedestrian's face is complete. Face completeness is detected mainly with the eigenface (Eigenface) method, which checks whether valid face information is present; if the face is incomplete, the pedestrian is marked and processing continues with the next step;
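The eigenface check in S106 can be sketched as projecting a candidate face onto a PCA subspace learned from training faces and thresholding the reconstruction error; a large error suggests no valid, complete face is present. This is a minimal illustration of the eigenface idea, not the patent's implementation; the function names and the rank `k` are illustrative.

```python
import numpy as np

def eigenface_basis(faces, k=2):
    """PCA basis from flattened training face vectors (one face per row)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components (eigenfaces)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(x, mean, basis):
    """Distance between x and its projection onto the face subspace;
    a large error suggests no valid (complete) face is present."""
    c = basis @ (x - mean)          # coefficients in the eigenface basis
    recon = mean + basis.T @ c      # best rank-k reconstruction
    return float(np.linalg.norm(x - recon))
```

A vector lying in the learned subspace reconstructs almost perfectly, while one with energy outside it produces a large error, which is what the completeness test exploits.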
S107: if the face is incomplete, mark the pedestrian, track the marked pedestrian continuously, detect the pedestrian's direction of motion, and raise an alarm when the direction of motion is determined to be a predetermined direction. Concretely, a warning prompt is raised when the direction of motion of the pedestrian with an incomplete face is judged to be toward the camera.
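The control flow of S101 through S107 can be sketched as a frame loop; all step implementations are injected as callables here because the patent does not name any concrete APIs, so every key in `steps` is a hypothetical placeholder.

```python
def veil_alarm_pipeline(frame_iter, steps):
    """Control flow of S101-S107. `steps` maps hypothetical names to
    callables: detect_foreground, has_pedestrian, face_complete,
    motion_is_alarm_direction."""
    for frame in frame_iter:
        region = steps["detect_foreground"](frame)      # S101-S104
        if region is None or not steps["has_pedestrian"](region):
            continue                                    # S105: no pedestrian, next frame
        if steps["face_complete"](region):
            continue                                    # S106: full face, not veiled
        if steps["motion_is_alarm_direction"](region):
            return "alarm"                              # S107: predetermined direction
    return "no_alarm"
```

With stub callables this reproduces the decision order: a frame only raises an alarm when a pedestrian is found, the face is incomplete, and the motion is in the alarm direction.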
Through the above technical scheme, the invention can rapidly detect moving objects in a video image and determine whether they are pedestrians, locate a detected pedestrian's head-and-shoulder position to check whether the face is complete, judge whether the pedestrian wears a face veil, track the pedestrian's direction of motion, and raise an alarm when that direction is a predetermined alarm direction. This provides an effective detection and recognition method for rapidly finding veiled, masked persons in video images.
Below, the implementation of the invention is illustrated with a concrete background-modeling method, Haar feature extraction, and classification of the Haar features with an AdaBoost classifier.
Step 1: read in the video stream.
Step 2: build the background model using mixture-of-Gaussians background modeling.
Because the camera has a fixed viewpoint, the detected pedestrian is a moving target. With a mixture-of-Gaussians background model established, the target can be locked on as a moving object, providing a reasonable target region for subsequent detection.
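The per-pixel training described below can be sketched as an online update of one Gaussian mode: a matched mode is pulled toward the pixel value and gains weight, while unmatched modes decay. This is in the spirit of mixture-of-Gaussians modeling (e.g. OpenCV's `createBackgroundSubtractorMOG2` packages the full mixture); the learning rate and the 2.5-sigma matching band are illustrative choices, not values from the patent.

```python
import math

def update_gaussian(mean, var, weight, pixel, lr=0.05, match_k=2.5):
    """One online update for a single Gaussian mode of one pixel.
    Returns the updated (mean, var, weight, matched)."""
    matched = abs(pixel - mean) <= match_k * math.sqrt(var)
    if matched:
        mean = (1 - lr) * mean + lr * pixel              # train the mean ...
        var = (1 - lr) * var + lr * (pixel - mean) ** 2  # ... and the variance
        weight = (1 - lr) * weight + lr                  # matched mode gains weight
    else:
        weight = (1 - lr) * weight                       # unmatched modes decay
    return mean, var, weight, matched
```

Repeating this over frames is what drives a point's weight toward the modeling-success threshold mentioned in the text.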
The background-modeling flow has three main stages: initializing the background model; matching and updating the background model; and, once background learning succeeds, detecting the foreground. Modeled points are divided into two classes, foreground points and background points, and during update learning the matched Gaussian model of each point is updated at a different rate depending on its class.
The initial Gaussian models are built from the modeled-point data of the first frame; afterwards, the mean, variance, and weight of each model are trained continuously from the incoming modeled-point data. When a point's weight reaches a preset modeling-success threshold, that point is considered successfully modeled; otherwise learning continues until the weight meets the threshold.
As the Gaussian parameters are trained and learned, more and more points model successfully. The system counts the successfully modeled points in the whole frame; if they reach 1/5 of the total number of modeled points in the image, background learning has succeeded. The process then enters the foreground-detection stage, where detected foreground points and background points update the background model at different rates to improve the model's adaptability.
Step 3: after background modeling completes, the foreground-detection stage begins. Background subtraction extracts the moving object as foreground, essentially by obtaining the moving object's pixels as foreground points. Concretely, each current modeled point is matched against its corresponding background model. The matching threshold is related to the variance of the corresponding background model, so the model adapts as its values change. The matching threshold for each color component can be expressed as:

Threshold_i = (Value_base + Theta) * Sensitivity

where Value_base is the base threshold, Theta is the variance, and Sensitivity is the sensitivity. If the change between a modeled point and its background model is within this threshold, the point matches the model. In addition, if the average gradient of the current point differs from the gradient in the background model by more than a certain percentage, for example 20%, the point is considered not to match.
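The matching rule above, the threshold formula plus the gradient check, can be sketched directly; the argument names are illustrative, and the 20% default mirrors the example percentage in the text.

```python
def match_threshold(value_base, theta, sensitivity):
    """The patent's per-component threshold:
    Threshold_i = (Value_base + Theta) * Sensitivity."""
    return (value_base + theta) * sensitivity

def matches_background(delta, avg_grad, bg_grad,
                       value_base, theta, sensitivity, grad_pct=0.20):
    """A modeled point matches its background model only when BOTH tests
    pass: the intensity change `delta` lies within the threshold, AND the
    point's average gradient differs from the model's gradient by no more
    than grad_pct (the text's 20% example)."""
    if abs(delta) > match_threshold(value_base, theta, sensitivity):
        return False
    if abs(avg_grad - bg_grad) > grad_pct * abs(bg_grad):
        return False
    return True
```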
In addition, if a point's background model has not modeled successfully, the point defaults to a background point. When the point's background model has modeled successfully but the point does not match it, the point is not necessarily foreground: it is also matched against the background models of its four-neighborhood points. Only if all of these matches also fail is the point determined to be a foreground point; otherwise it remains a background point.
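The decision logic of this paragraph can be written out as a small function; the boolean inputs stand in for the match tests described above.

```python
def classify_point(modeled, matches_own, neighbor_matches):
    """Foreground/background decision from the text: an unsuccessfully
    modeled point defaults to background; a point matching its own model
    is background; an unmatched point becomes foreground only when it
    also fails to match all four neighbours' background models."""
    if not modeled:
        return "background"
    if matches_own:
        return "background"
    if any(neighbor_matches):
        return "background"
    return "foreground"
```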
Step 4: apply morphological operations to the foreground points, remove noise and fill holes, and extract and label the connected regions.
Step 5: take the contiguously labeled regions as the final foreground region.
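The morphological clean-up of Step 4 can be sketched with plus-shaped (4-neighbour) binary erosion and dilation built from array shifts; opening removes isolated noise pixels and closing fills small gaps. In practice OpenCV's `morphologyEx` with `MORPH_OPEN`/`MORPH_CLOSE` and `connectedComponentsWithStats` for labeling would do this; the dependency-free version below is only a sketch.

```python
import numpy as np

def dilate(mask):
    """Plus-shaped (4-neighbour) binary dilation via shifted copies."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Plus-shaped (4-neighbour) binary erosion, the dual of dilate."""
    out = mask.copy()
    out[1:, :] &= mask[:-1, :]
    out[:-1, :] &= mask[1:, :]
    out[:, 1:] &= mask[:, :-1]
    out[:, :-1] &= mask[:, 1:]
    return out

def clean_foreground(mask):
    """Opening (erode then dilate) removes isolated noise;
    closing (dilate then erode) fills small gaps, as in Step 4."""
    opened = dilate(erode(mask))
    return erode(dilate(opened))
```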
Step 6: extract the features of the moving target or moving object in the foreground region.
Feature extraction here means Haar feature extraction. Haar features fall into several classes: edge features, linear features, center features, and diagonal features. They are combined into feature templates containing white and black rectangles, and the feature value of a template is defined as the sum of the white-rectangle pixels minus the sum of the black-rectangle pixels. Haar feature values reflect the gray-level variation of the image.
Some facial structures can be described simply by such rectangular features: the eyes are darker than the cheeks, the sides of the nose bridge are darker than the bridge itself, and the mouth is darker than its surroundings. However, rectangular features are sensitive only to simple structures such as edges and line segments, so they can only describe structures at particular orientations (horizontal, vertical, diagonal), as shown in Fig. 2.
For features of types A, B, and D in Fig. 2, the feature value is computed as:

v = Sum_white - Sum_black

For type C the formula is:

v = Sum_white - 2 * Sum_black

The black-region pixel sum is multiplied by 2 to make the pixel counts of the two kinds of rectangular areas consistent. By changing the template's size and position, a very large number of features can be enumerated exhaustively within an image subwindow. The templates in the figure are called "feature prototypes"; a feature obtained by extending (translating and scaling) a prototype within a subwindow is called a "rectangular feature"; and the value of a rectangular feature is called the "feature value".
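Rectangular sums like Sum_white and Sum_black are conventionally computed in constant time from an integral image (summed-area table), which is what makes the exhaustive enumeration above practical. A minimal sketch of a type-A two-rectangle edge feature; the function names are illustrative.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[0..y, 0..x] inclusive."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, y, x, h, w):
    """Pixel sum inside the h-by-w rectangle with top-left (y, x),
    read off the integral image with at most four lookups."""
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect_vertical(ii, y, x, h, w):
    """Type-A edge feature v = Sum_white - Sum_black over two
    side-by-side rectangles each of width w // 2."""
    white = rect_sum(ii, y, x, h, w // 2)
    black = rect_sum(ii, y, x + w // 2, h, w // 2)
    return white - black
```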
Step 7: classify the above features with the AdaBoost classifier and detect whether a valid pedestrian exists in the foreground region; if so, proceed to the next step, otherwise read the next frame of video.
Principle of the AdaBoost classifier: AdaBoost is an iterative algorithm that adds a new weak classifier in each round until a predetermined, sufficiently small error rate is reached. Each training sample carries a weight expressing the probability that it is selected into a classifier's training set. If a sample is classified correctly, its probability of selection into the next round's training set is lowered; conversely, if a sample is misclassified, its weight is raised.
The algorithm flow of AdaBoost is as follows:
First, initialize the weight distribution of the training data: at the start, every training sample is given the same weight 1/N.
Next, if a sample is classified correctly, its probability of selection into the next round's training set is lowered; conversely, if a sample is misclassified, its weight is raised. Concretely:
2. For m = 1, 2, ..., M:
a. Learn on the training set with weight distribution Dm to obtain a basic binary classifier Gm(x): chi -> {-1, +1}.
b. Compute the classification error rate em of Gm(x) on the weighted training set.
c. Compute the coefficient am of Gm(x), which represents the importance of Gm(x) in the final classifier; in the standard formulation, am = (1/2) ln((1 - em) / em). From this formula, when em <= 1/2 we have am >= 0, and am increases as em decreases, meaning that basic classifiers with smaller error rates play a larger role in the final classifier.
d. Update the weight distribution of the training data, Dm+1 = (w_{m+1,1}, w_{m+1,2}, ..., w_{m+1,i}, ..., w_{m+1,N}), so that the weights of samples misclassified by the basic classifier Gm(x) are increased and the weights of correctly classified samples are decreased. In this way, AdaBoost can "focus" on the samples that are harder to classify. Here Zm is a normalization factor that makes Dm+1 a probability distribution.
3. Build the linear combination of the basic classifiers; in the standard formulation the final classifier is the sign of f(x) = sum over m of am * Gm(x).
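The flow above can be sketched end to end with 1-D threshold stumps as the weak learners; this is a generic AdaBoost illustration, not the patent's cascade, and the stump search is deliberately brute-force for clarity.

```python
import math

def adaboost_train(xs, ys, rounds=3):
    """Minimal AdaBoost over 1-D threshold stumps: reweight samples each
    round and weight each weak learner by a_m = 0.5 * ln((1 - e_m) / e_m)."""
    n = len(xs)
    w = [1.0 / n] * n                        # step 1: uniform weights 1/N
    learners = []
    for _ in range(rounds):
        best = None
        # exhaustive search over stump thresholds and polarities (step 2a)
        for thr in xs:
            for pol in (1, -1):
                preds = [pol if x > thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best          # step 2b: weighted error e_m
        err = min(max(err, 1e-10), 1 - 1e-10)
        a = 0.5 * math.log((1 - err) / err)  # step 2c: a_m grows as e_m shrinks
        learners.append((a, thr, pol))
        # step 2d: raise misclassified weights, normalize by Z_m
        w = [wi * math.exp(-a * y * p) for wi, y, p in zip(w, ys, preds)]
        z = sum(w)
        w = [wi / z for wi in w]
    return learners

def adaboost_predict(learners, x):
    """Sign of the weighted vote of the weak learners (step 3)."""
    s = sum(a * (pol if x > thr else -pol) for a, thr, pol in learners)
    return 1 if s > 0 else -1
```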
Step 8: for a detected pedestrian, use the head-and-shoulder model to detect the pedestrian's head-and-shoulder position.
The head-and-shoulder model screens candidates mainly by the Omega shape formed by the head and shoulders. Head-and-shoulder pictures of pedestrians are collected as positive samples, and the feature-learning method above is used for feature training and learning to obtain the pedestrian's head-and-shoulder position.
Step 9: on this basis, detect whether a complete face is present. If a complete face exists, the person is not wearing a face veil; if no complete face exists, the person is masked, and the masked person is marked.
Step 10: track the marked target continuously and detect its direction of travel. The masked person is tracked and marked, and the direction of the target's motion is computed from its change in position.
Step 11: make the alarm decision from the above detection results: if the face is incomplete and the direction of motion is toward the camera, the person is considered a veiled person and an alarm is raised; otherwise no valid target is present and the next frame is read.
Step 12: output the alarm.
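Steps 10 and 11 can be sketched from a track of centroid positions; the direction labels and the assumption that an approaching pedestrian's centroid grows in image y (y increases downward) are illustrative conventions, not details from the patent.

```python
def motion_direction(track):
    """Dominant direction from a list of (x, y) centroids over time,
    computed from the change in position between first and last points."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dy) >= abs(dx):
        return "toward_camera" if dy > 0 else "away_from_camera"
    return "right" if dx > 0 else "left"

def should_alarm(face_complete, track, alarm_direction="toward_camera"):
    """Step 11 decision: alarm only for an incomplete face moving in the
    predetermined direction (here: approaching the camera)."""
    return (not face_complete) and motion_direction(track) == alarm_direction
```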
It is a further object of the invention to provide a system for face-veil detection in a video image, comprising:
a background-model building module, for reading in a video stream and building a background model using mixture-of-Gaussians background modeling;
a foreground-point acquisition module, for using background subtraction against the background model to obtain moving-object pixels as foreground points;
a foreground-region determination module, for applying morphological operations to the foreground points, removing noise and filling holes, extracting and labeling connected regions, and taking the contiguously labeled regions as the final foreground region;
a feature-extraction module, for extracting moving-object features from the final foreground region;
a pedestrian-detection module, for classifying the extracted features with an AdaBoost classifier and detecting whether a pedestrian is present in the foreground region;
a face-detection module, for locating the pedestrian's head-and-shoulder position with a preset head-and-shoulder model when a pedestrian is detected, and on that basis detecting whether the pedestrian's face is complete;
a motion-direction detection module, for continuously tracking the marked pedestrian after the face is detected to be incomplete and marked, detecting the pedestrian's direction of motion, and raising an alarm when the direction of motion is determined to be a predetermined direction.
The detection method and principle of this system for face-veil detection in a video image are the same as those of the method for face-veil detection in a video image of the embodiments of the invention, and are not repeated here.
Through the above technical scheme, the invention can rapidly detect moving objects in a video image and determine whether they are pedestrians, locate a detected pedestrian's head-and-shoulder position to check whether the face is complete, judge whether the pedestrian wears a face veil, track the pedestrian's direction of motion, and raise an alarm when that direction is a predetermined alarm direction. This provides an effective detection and recognition method for rapidly finding veiled, masked persons in video images.
The above is only the preferred embodiment of the invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications can also be made without departing from the principles of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention.
Claims (10)
1. A method for face-veil detection in a video image, characterized by comprising the following steps:
reading in a video stream and building a background model using mixture-of-Gaussians background modeling;
using background subtraction against the background model to obtain the pixels of moving objects as foreground points;
applying morphological operations to the foreground points, removing noise and filling holes, extracting and labeling connected regions, and taking the contiguously labeled regions as the final foreground region;
extracting moving-object features from the final foreground region;
classifying the extracted features with an AdaBoost classifier and detecting whether a pedestrian is present in the foreground region;
when a pedestrian is detected, locating the pedestrian's head-and-shoulder position with a preset head-and-shoulder model, and on that basis detecting whether the pedestrian's face is complete;
if the face is incomplete, marking the pedestrian, continuously tracking the marked pedestrian, detecting the pedestrian's direction of motion, and raising an alarm when the direction of motion is determined to be a predetermined direction.
2. The method for face-veil detection in a video image according to claim 1, characterized in that the features are Haar features, and whether the pedestrian's face is complete is detected with the eigenface method.
3. The method for face-veil detection in a video image according to claim 1, characterized in that the step of using background subtraction against the background model to obtain moving-object pixels as foreground points is: judging whether the current modeled point matches its corresponding background model; if it matches, the point is a background point, otherwise it is a foreground point.
4. The method for face-veil detection in a video image according to claim 2, characterized in that judging whether the current modeled point matches its corresponding background model uses the following steps: judging whether the change relative to the point's corresponding background model lies within a preset matching threshold, and whether the average gradient of the point differs from the gradient in the background model by no more than a predetermined percentage threshold; if either condition does not hold, the point is considered not to match.
5. The method for face-veil detection in a video image according to claim 3, characterized in that the matching threshold is expressed as:

Threshold_i = (Value_base + Theta) * Sensitivity

where Value_base is the base threshold, Theta is the variance, and Sensitivity is the sensitivity.
6. A system for face-veil detection in a video image, characterized by comprising:
a background-model building module, for reading in a video stream and building a background model using mixture-of-Gaussians background modeling;
a foreground-point acquisition module, for using background subtraction against the background model to obtain moving-object pixels as foreground points;
a foreground-region determination module, for applying morphological operations to the foreground points, removing noise and filling holes, extracting and labeling connected regions, and taking the contiguously labeled regions as the final foreground region;
a feature-extraction module, for extracting moving-object features from the final foreground region;
a pedestrian-detection module, for classifying the extracted features with an AdaBoost classifier and detecting whether a pedestrian is present in the foreground region;
a face-detection module, for locating the pedestrian's head-and-shoulder position with a preset head-and-shoulder model when a pedestrian is detected, and on that basis detecting whether the pedestrian's face is complete;
a motion-direction detection module, for continuously tracking the marked pedestrian after the face is detected to be incomplete and marked, detecting the pedestrian's direction of motion, and raising an alarm when the direction of motion is determined to be a predetermined direction.
7. The system for face-veil detection in a video image according to claim 6, characterized in that the features are Haar features, and whether the pedestrian's face is complete is detected with the eigenface method.
8. The system for face-veil detection in a video image according to claim 6, characterized in that the step of using background subtraction against the background model to obtain moving-object pixels as foreground points is: judging whether the current modeled point matches its corresponding background model; if it matches, the point is a background point, otherwise it is a foreground point.
9. The system for face-veil detection in a video image according to claim 6, characterized in that judging whether the current modeled point matches its corresponding background model uses the following steps: judging whether the change relative to the point's corresponding background model lies within a preset matching threshold, and whether the average gradient of the point differs from the gradient in the background model by no more than a predetermined percentage threshold; if either condition does not hold, the point is considered not to match.
10. The system for face-veil detection in a video image according to any one of claims 6 to 9, characterized in that the matching threshold is expressed as:

Threshold_i = (Value_base + Theta) * Sensitivity

where Value_base is the base threshold, Theta is the variance, and Sensitivity is the sensitivity.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610356813.3A CN106056060A (en) | 2016-05-26 | 2016-05-26 | Method and system for masked veil detection in video image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610356813.3A CN106056060A (en) | 2016-05-26 | 2016-05-26 | Method and system for masked veil detection in video image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106056060A true CN106056060A (en) | 2016-10-26 |
Family
ID=57175262
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610356813.3A Pending CN106056060A (en) | 2016-05-26 | 2016-05-26 | Method and system for masked veil detection in video image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106056060A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108171842A (en) * | 2017-12-28 | 2018-06-15 | 深圳市泛海三江科技发展有限公司 | A kind of personnel management system |
CN108256404A (en) * | 2016-12-29 | 2018-07-06 | 北京旷视科技有限公司 | Pedestrian detection method and device |
CN111310215A (en) * | 2020-02-26 | 2020-06-19 | 海南大学 | Multilayer digital veil design method for image content safety and privacy protection |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110150337A1 (en) * | 2009-12-17 | 2011-06-23 | National Tsing Hua University | Method and system for automatic figure segmentation |
CN103593672A (en) * | 2013-05-27 | 2014-02-19 | 深圳市智美达科技有限公司 | Adaboost classifier on-line learning method and Adaboost classifier on-line learning system |
CN104616277A (en) * | 2013-11-01 | 2015-05-13 | 深圳中兴力维技术有限公司 | Pedestrian positioning method and device thereof in structural description of video |
CN104657712A (en) * | 2015-02-09 | 2015-05-27 | 惠州学院 | Method for detecting masked person in monitoring video |
CN105160297A (en) * | 2015-07-27 | 2015-12-16 | 华南理工大学 | Masked man event automatic detection method based on skin color characteristics |
US20160140724A1 (en) * | 2014-11-14 | 2016-05-19 | Huawei Technologies Co., Ltd. | Image processing method and apparatus |
Non-Patent Citations (5)
Title |
---|
Zhang Xianda et al.: "Matrix Theory and Its Engineering Applications", 31 December 2015 * |
Yang Qi: "Research on Human Gait and Behavior Recognition Technology", 28 February 2014 * |
Luo Yingwei et al.: "Peking University Computer Science and Technology Experimental Teaching Content System", 31 May 2012 * |
Zhao Chunhui et al.: "Analysis of Moving Targets in Video Images", 30 June 2011 * |
Wei Hongqiang: "Detection Method for Dim Small Moving Targets in Image Sequences", Chinese Journal of Scientific Instrument * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109522793B (en) | Method for detecting and identifying abnormal behaviors of multiple persons based on machine vision | |
CN111460962B (en) | Face recognition method and face recognition system for mask | |
CN109670441B (en) | Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet | |
CN101739551B (en) | Method and system for identifying moving objects | |
JP6549797B2 (en) | Method and system for identifying head of passerby | |
CN104778453B | Night pedestrian detection method based on infrared pedestrian brightness statistics features | |
CN105160297B | Automatic detection method for masked-person events based on skin color features | |
CN103246896B | Robust real-time vehicle detection and tracking method | |
WO2016015547A1 (en) | Machine vision-based method and system for aircraft docking guidance and aircraft type identification | |
CN106682603B (en) | Real-time driver fatigue early warning system based on multi-source information fusion | |
CN101587544B | Vehicle-mounted anti-tracking device based on computer vision | |
CN106682578B (en) | Weak light face recognition method based on blink detection | |
CN107491720A | Vehicle model recognition method based on a modified convolutional neural network | |
CN107301378A | Pedestrian detection method and system based on multi-classifier integration in images | |
CN101980245B (en) | Adaptive template matching-based passenger flow statistical method | |
CN106022278A (en) | Method and system for detecting people wearing burka in video images | |
CN102214291A (en) | Method for quickly and accurately detecting and tracking human face based on video sequence | |
CN104318263A (en) | Real-time high-precision people stream counting method | |
CN105404857A (en) | Infrared-based night intelligent vehicle front pedestrian detection method | |
CN105868690A (en) | Method and apparatus for identifying mobile phone use behavior of driver | |
CN104156643B (en) | Eye sight-based password inputting method and hardware device thereof | |
CN105893962A (en) | Method for counting passenger flow at airport security check counter | |
CN107139666A (en) | Obstacle detouring identifying system and method | |
CN104239905A (en) | Moving target recognition method and intelligent elevator billing system having moving target recognition function | |
CN103810491A (en) | Head posture estimation interest point detection method fusing depth and gray scale image characteristic points |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20161026 |